Jan 23 18:49:10.952591 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:49:10.952609 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:10.952616 kernel: BIOS-provided physical RAM map:
Jan 23 18:49:10.952621 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:49:10.952629 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 23 18:49:10.952634 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 18:49:10.952639 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 18:49:10.952644 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:49:10.952648 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:49:10.952653 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:49:10.952658 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 18:49:10.952663 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 18:49:10.952667 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:49:10.952674 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:49:10.952680 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:49:10.952685 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 18:49:10.952690 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:49:10.952694 kernel: NX (Execute Disable) protection: active
Jan 23 18:49:10.952702 kernel: APIC: Static calls initialized
Jan 23 18:49:10.952707 kernel: e820: update [mem 0x7dfae018-0x7dfb7a57] usable ==> usable
Jan 23 18:49:10.952712 kernel: e820: update [mem 0x7df72018-0x7dfad657] usable ==> usable
Jan 23 18:49:10.952717 kernel: e820: update [mem 0x7df36018-0x7df71657] usable ==> usable
Jan 23 18:49:10.952722 kernel: extended physical RAM map:
Jan 23 18:49:10.952726 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:49:10.952731 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007df36017] usable
Jan 23 18:49:10.952736 kernel: reserve setup_data: [mem 0x000000007df36018-0x000000007df71657] usable
Jan 23 18:49:10.952741 kernel: reserve setup_data: [mem 0x000000007df71658-0x000000007df72017] usable
Jan 23 18:49:10.952746 kernel: reserve setup_data: [mem 0x000000007df72018-0x000000007dfad657] usable
Jan 23 18:49:10.952751 kernel: reserve setup_data: [mem 0x000000007dfad658-0x000000007dfae017] usable
Jan 23 18:49:10.952758 kernel: reserve setup_data: [mem 0x000000007dfae018-0x000000007dfb7a57] usable
Jan 23 18:49:10.952763 kernel: reserve setup_data: [mem 0x000000007dfb7a58-0x000000007ed3efff] usable
Jan 23 18:49:10.952768 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 18:49:10.952773 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 18:49:10.952778 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 18:49:10.952782 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 18:49:10.952787 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 18:49:10.952792 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 18:49:10.952797 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 18:49:10.952802 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 18:49:10.952807 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 18:49:10.952817 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 18:49:10.952822 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 18:49:10.952827 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 18:49:10.952832 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 18:49:10.952837 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198 RNG=0x7fb73018
Jan 23 18:49:10.952844 kernel: random: crng init done
Jan 23 18:49:10.952850 kernel: efi: Remove mem138: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 18:49:10.952855 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 18:49:10.952860 kernel: secureboot: Secure boot disabled
Jan 23 18:49:10.952865 kernel: SMBIOS 3.0.0 present.
Jan 23 18:49:10.952870 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 23 18:49:10.952875 kernel: DMI: Memory slots populated: 1/1
Jan 23 18:49:10.952880 kernel: Hypervisor detected: KVM
Jan 23 18:49:10.952885 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 18:49:10.952890 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 18:49:10.952895 kernel: kvm-clock: using sched offset of 13301910709 cycles
Jan 23 18:49:10.952902 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 18:49:10.952908 kernel: tsc: Detected 2399.998 MHz processor
Jan 23 18:49:10.952913 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:49:10.952918 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:49:10.952923 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 23 18:49:10.952929 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:49:10.952934 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:49:10.952939 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 18:49:10.952944 kernel: Using GB pages for direct mapping
Jan 23 18:49:10.952951 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:49:10.952956 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 23 18:49:10.952962 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 18:49:10.952967 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.952972 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.952977 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 23 18:49:10.952982 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.952988 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.952993 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.953001 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 18:49:10.953006 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 18:49:10.953017 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 23 18:49:10.953022 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 23 18:49:10.953027 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 23 18:49:10.953032 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 23 18:49:10.953038 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 23 18:49:10.953043 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 23 18:49:10.953048 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 23 18:49:10.953057 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 23 18:49:10.953062 kernel: No NUMA configuration found
Jan 23 18:49:10.953067 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 23 18:49:10.953072 kernel: NODE_DATA(0) allocated [mem 0x179ff8dc0-0x179ffffff]
Jan 23 18:49:10.953078 kernel: Zone ranges:
Jan 23 18:49:10.953083 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:49:10.953088 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:49:10.953093 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 18:49:10.953098 kernel: Device empty
Jan 23 18:49:10.953103 kernel: Movable zone start for each node
Jan 23 18:49:10.953111 kernel: Early memory node ranges
Jan 23 18:49:10.953116 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:49:10.953121 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 23 18:49:10.953126 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 23 18:49:10.953131 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 23 18:49:10.953136 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 18:49:10.953141 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 23 18:49:10.953147 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:49:10.953152 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:49:10.953159 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 18:49:10.953164 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 18:49:10.953170 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 23 18:49:10.953175 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 18:49:10.953238 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 18:49:10.953243 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 18:49:10.953248 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:49:10.953254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 18:49:10.953259 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 18:49:10.953267 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:49:10.953272 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 18:49:10.953277 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 18:49:10.953282 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:49:10.953287 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 18:49:10.953293 kernel: CPU topo: Max. logical packages: 1
Jan 23 18:49:10.953298 kernel: CPU topo: Max. logical dies: 1
Jan 23 18:49:10.953313 kernel: CPU topo: Max. dies per package: 1
Jan 23 18:49:10.953318 kernel: CPU topo: Max. threads per core: 1
Jan 23 18:49:10.953324 kernel: CPU topo: Num. cores per package: 2
Jan 23 18:49:10.953329 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:49:10.953334 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:49:10.953343 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 18:49:10.953348 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 23 18:49:10.953353 kernel: Booting paravirtualized kernel on KVM
Jan 23 18:49:10.953359 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:49:10.953364 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:49:10.953373 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:49:10.953378 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:49:10.953384 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:49:10.953389 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 18:49:10.953395 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:10.953400 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:49:10.953406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 18:49:10.953411 kernel: Fallback order for Node 0: 0
Jan 23 18:49:10.953419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Jan 23 18:49:10.953424 kernel: Policy zone: Normal
Jan 23 18:49:10.953430 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:49:10.953435 kernel: software IO TLB: area num 2.
Jan 23 18:49:10.953440 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:49:10.953446 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:49:10.953451 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:49:10.953456 kernel: Dynamic Preempt: voluntary
Jan 23 18:49:10.953462 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:49:10.953470 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:49:10.953476 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:49:10.953482 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:49:10.953487 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:49:10.953493 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:49:10.953498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:49:10.953504 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:49:10.953509 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:10.953515 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:10.953520 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:49:10.953529 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 18:49:10.953535 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:49:10.953540 kernel: Console: colour dummy device 80x25
Jan 23 18:49:10.953546 kernel: printk: legacy console [tty0] enabled
Jan 23 18:49:10.953551 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:49:10.953556 kernel: ACPI: Core revision 20240827
Jan 23 18:49:10.953562 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 18:49:10.953567 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:49:10.953573 kernel: x2apic enabled
Jan 23 18:49:10.953581 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:49:10.953586 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 18:49:10.953591 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Jan 23 18:49:10.953597 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Jan 23 18:49:10.953602 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:49:10.953608 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 18:49:10.953613 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 18:49:10.953619 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:49:10.953627 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 18:49:10.953632 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 18:49:10.953638 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 18:49:10.953643 kernel: active return thunk: srso_alias_return_thunk
Jan 23 18:49:10.953648 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 23 18:49:10.953654 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 18:49:10.953659 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 18:49:10.953665 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:49:10.953670 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:49:10.953678 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:49:10.953683 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 18:49:10.953688 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 18:49:10.953694 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 18:49:10.953699 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 18:49:10.953705 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 18:49:10.953710 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 18:49:10.953715 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 18:49:10.953721 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 18:49:10.953729 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 18:49:10.953735 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 18:49:10.953741 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:49:10.953747 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:49:10.953752 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:49:10.953758 kernel: landlock: Up and running.
Jan 23 18:49:10.953764 kernel: SELinux: Initializing.
Jan 23 18:49:10.953770 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:10.953775 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 18:49:10.953783 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 23 18:49:10.953788 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 18:49:10.953794 kernel: ... version: 0
Jan 23 18:49:10.953799 kernel: ... bit width: 48
Jan 23 18:49:10.953804 kernel: ... generic registers: 6
Jan 23 18:49:10.953810 kernel: ... value mask: 0000ffffffffffff
Jan 23 18:49:10.953815 kernel: ... max period: 00007fffffffffff
Jan 23 18:49:10.953820 kernel: ... fixed-purpose events: 0
Jan 23 18:49:10.953826 kernel: ... event mask: 000000000000003f
Jan 23 18:49:10.953833 kernel: signal: max sigframe size: 3376
Jan 23 18:49:10.953839 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:49:10.953844 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 18:49:10.953850 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:49:10.953855 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:49:10.953861 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:49:10.953866 kernel: .... node #0, CPUs: #1
Jan 23 18:49:10.953871 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:49:10.953877 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Jan 23 18:49:10.953884 kernel: Memory: 3848520K/4091168K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 237016K reserved, 0K cma-reserved)
Jan 23 18:49:10.953890 kernel: devtmpfs: initialized
Jan 23 18:49:10.953895 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:49:10.953901 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 23 18:49:10.953906 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:49:10.953911 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:49:10.953917 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:49:10.953922 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:49:10.953928 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:49:10.953935 kernel: audit: type=2000 audit(1769194147.551:1): state=initialized audit_enabled=0 res=1
Jan 23 18:49:10.953941 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:49:10.953946 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:49:10.953952 kernel: cpuidle: using governor menu
Jan 23 18:49:10.953957 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:49:10.953962 kernel: dca service started, version 1.12.1
Jan 23 18:49:10.953968 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 18:49:10.953973 kernel: PCI: Using configuration type 1 for base access
Jan 23 18:49:10.953979 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:49:10.953986 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:49:10.953992 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:49:10.953997 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:49:10.954003 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:49:10.954008 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:49:10.954020 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:49:10.954025 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:49:10.954030 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:49:10.954036 kernel: ACPI: Interpreter enabled
Jan 23 18:49:10.954044 kernel: ACPI: PM: (supports S0 S5)
Jan 23 18:49:10.954050 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:49:10.954055 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:49:10.954061 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 18:49:10.954066 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 18:49:10.954071 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 18:49:10.956620 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 18:49:10.956732 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 18:49:10.956840 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 18:49:10.956846 kernel: PCI host bridge to bus 0000:00
Jan 23 18:49:10.956948 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 18:49:10.957046 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 18:49:10.957135 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 18:49:10.957242 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 23 18:49:10.957330 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 18:49:10.957421 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 23 18:49:10.957509 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 18:49:10.957618 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 18:49:10.957725 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 18:49:10.957823 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 23 18:49:10.957920 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 23 18:49:10.958027 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Jan 23 18:49:10.958125 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 18:49:10.960257 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 18:49:10.960372 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.960473 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Jan 23 18:49:10.960572 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 18:49:10.960668 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 23 18:49:10.960769 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 18:49:10.960871 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.960966 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Jan 23 18:49:10.961068 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 18:49:10.961164 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 23 18:49:10.961291 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.961411 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Jan 23 18:49:10.961508 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 18:49:10.961604 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 23 18:49:10.961699 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 18:49:10.961801 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.961896 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Jan 23 18:49:10.961991 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 18:49:10.962105 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 18:49:10.964935 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.965051 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Jan 23 18:49:10.965149 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 18:49:10.965258 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 23 18:49:10.965356 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 18:49:10.965462 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.965558 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Jan 23 18:49:10.965658 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 18:49:10.965752 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 23 18:49:10.965847 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 18:49:10.965948 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.966053 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Jan 23 18:49:10.966149 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 18:49:10.966257 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 18:49:10.966356 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 18:49:10.966456 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.966551 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Jan 23 18:49:10.966645 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 18:49:10.966740 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 18:49:10.966834 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 18:49:10.966941 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 18:49:10.967053 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Jan 23 18:49:10.967151 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 18:49:10.967275 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 23 18:49:10.967371 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 18:49:10.967475 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 18:49:10.967571 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 18:49:10.967674 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 18:49:10.967768 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Jan 23 18:49:10.967863 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Jan 23 18:49:10.967966 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 18:49:10.968071 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Jan 23 18:49:10.968193 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:49:10.968300 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Jan 23 18:49:10.968400 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 23 18:49:10.968500 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 18:49:10.968596 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 18:49:10.968704 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 18:49:10.968805 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Jan 23 18:49:10.968901 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 18:49:10.969007 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 18:49:10.969125 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Jan 23 18:49:10.971707 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 23 18:49:10.971820 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 18:49:10.971932 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 18:49:10.972050 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 23 18:49:10.972150 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 18:49:10.972280 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 18:49:10.972383 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Jan 23 18:49:10.972483 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 23 18:49:10.972578 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 18:49:10.972686 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 18:49:10.972788 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Jan 23 18:49:10.972889 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 23 18:49:10.972990 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 18:49:10.972998 kernel: acpiphp: Slot [0] registered
Jan 23 18:49:10.973115 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:49:10.973232 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Jan 23 18:49:10.973333 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 23 18:49:10.973432 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 18:49:10.973528 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 18:49:10.973537 kernel: acpiphp: Slot [0-2] registered
Jan 23 18:49:10.973632 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 18:49:10.973639 kernel: acpiphp: Slot [0-3] registered
Jan 23 18:49:10.973733 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 18:49:10.973743 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 18:49:10.973764 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 18:49:10.973772 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 18:49:10.973778 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 18:49:10.973786 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 18:49:10.973792 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 18:49:10.973797 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 18:49:10.973803 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 18:49:10.973808 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 18:49:10.973814 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 18:49:10.973820 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 18:49:10.973825 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 18:49:10.973831 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 18:49:10.973838 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 18:49:10.973844 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 18:49:10.973852 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 18:49:10.973857 kernel: iommu: Default domain type: Translated
Jan 23 18:49:10.973863 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:49:10.973869 kernel: efivars: Registered efivars operations
Jan 23 18:49:10.973876 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:49:10.973882 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 18:49:10.973888 kernel: e820: reserve RAM buffer [mem 0x7df36018-0x7fffffff]
Jan 23 18:49:10.973894 kernel: e820: reserve RAM buffer [mem 0x7df72018-0x7fffffff]
Jan 23 18:49:10.973899 kernel: e820: reserve RAM buffer [mem 0x7dfae018-0x7fffffff]
Jan 23 18:49:10.973905 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 23 18:49:10.973911 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 23 18:49:10.973916 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 23 18:49:10.973924 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 23 18:49:10.974030 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 18:49:10.974127 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 18:49:10.974235 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 18:49:10.974242 kernel: vgaarb: loaded
Jan 23 18:49:10.974248 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 18:49:10.974254 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 18:49:10.974260 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 18:49:10.974266 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:49:10.974280 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:49:10.974286 kernel: pnp: PnP ACPI init
Jan 23 18:49:10.974390 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 18:49:10.974398 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 18:49:10.974404 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:49:10.974409 kernel: NET: Registered PF_INET protocol family
Jan 23 18:49:10.974415 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:49:10.974421 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 18:49:10.974429 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:49:10.974435 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 18:49:10.974441 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 18:49:10.974446 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 18:49:10.974452 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:49:10.974458 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 18:49:10.974464 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:49:10.974469 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:49:10.974572 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 18:49:10.974707 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 18:49:10.974807 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 18:49:10.974903 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 18:49:10.975000 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 18:49:10.975105 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 18:49:10.975263 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 18:49:10.975362 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 18:49:10.975464 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned
Jan 23 18:49:10.975564 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 18:49:10.975660 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 23 18:49:10.975757 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 18:49:10.975853 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 18:49:10.975949 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 23 18:49:10.976054 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 18:49:10.976150 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 23 18:49:10.976261 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 18:49:10.976358 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 18:49:10.976457 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 18:49:10.976553 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 18:49:10.976648 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 23 18:49:10.976742 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 18:49:10.976837 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 18:49:10.976933 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 23 18:49:10.977041 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 18:49:10.977146 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned
Jan 23 18:49:10.977256 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 18:49:10.977355 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 23 18:49:10.977449 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 18:49:10.977546 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 18:49:10.977641 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 18:49:10.977736 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 23 18:49:10.977834 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 18:49:10.977928 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 18:49:10.978033 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 18:49:10.978131 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 23 18:49:10.978237 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 23 18:49:10.978333 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 18:49:10.978433 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 18:49:10.978522 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 18:49:10.978610 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 18:49:10.978703 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 23 18:49:10.978792 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 18:49:10.978884 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 23 18:49:10.978985 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 23 18:49:10.979089 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 18:49:10.980274 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 23 18:49:10.980394 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 23 18:49:10.980492 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 18:49:10.980592 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 18:49:10.980692 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 23 18:49:10.980786 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 18:49:10.980889 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 23 18:49:10.980983 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 18:49:10.981100 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 23 18:49:10.981209 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 23 18:49:10.981306 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 18:49:10.981405 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 23 18:49:10.981500 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 23 18:49:10.981594 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 18:49:10.981697 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 23 18:49:10.981794 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 23 18:49:10.981887 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 18:49:10.981895 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 18:49:10.981901 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:49:10.981907 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:49:10.981913 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 23 18:49:10.981919 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Jan 23 18:49:10.981927 kernel: Initialise system trusted keyrings
Jan 23 18:49:10.981933 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 18:49:10.981939 kernel: Key type asymmetric registered
Jan 23 18:49:10.981944 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:49:10.981950 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:49:10.981955 kernel: io scheduler mq-deadline registered
Jan 23 18:49:10.981961 kernel: io scheduler kyber registered
Jan 23 18:49:10.981967 kernel: io scheduler bfq registered
Jan 23 18:49:10.982076 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 23 18:49:10.982177 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 23 18:49:10.982297 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 23 18:49:10.982394 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 23 18:49:10.982490 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 23 18:49:10.982586 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 23 18:49:10.982682 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 23 18:49:10.982778 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 23 18:49:10.982874 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 23 18:49:10.982972 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 23 18:49:10.983078 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 23 18:49:10.983175 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 23 18:49:10.983906 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 23 18:49:10.984016 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 23 18:49:10.984118 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 23 18:49:10.984232 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 23 18:49:10.984244 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 18:49:10.984340 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 23 18:49:10.984437 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 23 18:49:10.984444 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:49:10.984450 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 23 18:49:10.984456 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:49:10.984462 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:49:10.984470 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 18:49:10.984476 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 18:49:10.984482 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 18:49:10.984585 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 18:49:10.984593 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 18:49:10.984684 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 18:49:10.984775 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T18:49:10 UTC (1769194150)
Jan 23 18:49:10.984866 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 18:49:10.984876 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option.
Jan 23 18:49:10.984883 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 18:49:10.984889 kernel: efifb: probing for efifb
Jan 23 18:49:10.984895 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k
Jan 23 18:49:10.984901 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 18:49:10.984906 kernel: efifb: scrolling: redraw
Jan 23 18:49:10.984912 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 18:49:10.984918 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 18:49:10.984924 kernel: fb0: EFI VGA frame buffer device
Jan 23 18:49:10.984932 kernel: pstore: Using crash dump compression: deflate
Jan 23 18:49:10.984938 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 18:49:10.984944 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:49:10.984949 kernel: Segment Routing with IPv6
Jan 23 18:49:10.984955 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:49:10.984961 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:49:10.984966 kernel: Key type dns_resolver registered
Jan 23 18:49:10.984972 kernel: IPI shorthand broadcast: enabled
Jan 23 18:49:10.984978 kernel: sched_clock: Marking stable (3053012191, 233356639)->(3307386195, -21017365)
Jan 23 18:49:10.984986 kernel: registered taskstats version 1
Jan 23 18:49:10.984992 kernel: Loading compiled-in X.509 certificates
Jan 23 18:49:10.984998 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:49:10.985004 kernel: Demotion targets for Node 0: null
Jan 23 18:49:10.985017 kernel: Key type .fscrypt registered
Jan 23 18:49:10.985023 kernel: Key type fscrypt-provisioning registered
Jan 23 18:49:10.985029 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:49:10.985035 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:49:10.985041 kernel: ima: No architecture policies found
Jan 23 18:49:10.985049 kernel: clk: Disabling unused clocks
Jan 23 18:49:10.985055 kernel: Warning: unable to open an initial console.
Jan 23 18:49:10.985061 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:49:10.985067 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:49:10.985073 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:49:10.985080 kernel: Run /init as init process
Jan 23 18:49:10.985085 kernel: with arguments:
Jan 23 18:49:10.985091 kernel: /init
Jan 23 18:49:10.985097 kernel: with environment:
Jan 23 18:49:10.985105 kernel: HOME=/
Jan 23 18:49:10.985111 kernel: TERM=linux
Jan 23 18:49:10.985118 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:49:10.985126 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:49:10.985133 systemd[1]: Detected virtualization kvm.
Jan 23 18:49:10.985138 systemd[1]: Detected architecture x86-64.
Jan 23 18:49:10.985144 systemd[1]: Running in initrd.
Jan 23 18:49:10.985152 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:49:10.985158 systemd[1]: Hostname set to .
Jan 23 18:49:10.985164 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 18:49:10.985171 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:49:10.985177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:49:10.985200 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:49:10.985207 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:49:10.985213 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:49:10.985221 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:49:10.985228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:49:10.985235 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:49:10.985241 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:49:10.985247 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:49:10.985253 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:49:10.985259 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:49:10.985267 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:49:10.985274 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:49:10.985280 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:49:10.985286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:49:10.985292 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:49:10.985298 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:49:10.985304 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:49:10.985310 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:49:10.985316 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:49:10.985324 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:49:10.985330 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:49:10.985336 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:49:10.985342 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:49:10.985348 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:49:10.985355 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:49:10.985361 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:49:10.985367 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:49:10.985375 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:49:10.985381 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:49:10.985406 systemd-journald[199]: Collecting audit messages is disabled.
Jan 23 18:49:10.985421 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:49:10.985430 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:49:10.985436 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:49:10.985443 systemd-journald[199]: Journal started
Jan 23 18:49:10.985459 systemd-journald[199]: Runtime Journal (/run/log/journal/482b9403240e4b7da1d5bedff195648d) is 8M, max 76.1M, 68.1M free.
Jan 23 18:49:10.970037 systemd-modules-load[200]: Inserted module 'overlay'
Jan 23 18:49:10.991236 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:49:10.995206 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:49:10.996609 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:49:11.001495 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:49:11.002756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:49:11.008250 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:49:11.012990 systemd-modules-load[200]: Inserted module 'br_netfilter'
Jan 23 18:49:11.013470 kernel: Bridge firewalling registered
Jan 23 18:49:11.018928 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:49:11.019974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:49:11.026137 systemd-tmpfiles[214]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:49:11.026243 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:49:11.030362 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:49:11.036320 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:49:11.037479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:49:11.042353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:49:11.045778 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:49:11.047429 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:49:11.053281 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:49:11.062653 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:49:11.091951 systemd-resolved[239]: Positive Trust Anchors:
Jan 23 18:49:11.092602 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:49:11.092993 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:49:11.096598 systemd-resolved[239]: Defaulting to hostname 'linux'.
Jan 23 18:49:11.097855 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:49:11.098323 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:49:11.141236 kernel: SCSI subsystem initialized
Jan 23 18:49:11.148204 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:49:11.165244 kernel: iscsi: registered transport (tcp)
Jan 23 18:49:11.192359 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:49:11.192442 kernel: QLogic iSCSI HBA Driver
Jan 23 18:49:11.220463 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:49:11.252999 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:49:11.254707 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:49:11.334430 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:49:11.338315 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:49:11.392231 kernel: raid6: avx512x4 gen() 44711 MB/s
Jan 23 18:49:11.410235 kernel: raid6: avx512x2 gen() 47112 MB/s
Jan 23 18:49:11.428223 kernel: raid6: avx512x1 gen() 43891 MB/s
Jan 23 18:49:11.446219 kernel: raid6: avx2x4 gen() 46917 MB/s
Jan 23 18:49:11.464232 kernel: raid6: avx2x2 gen() 49749 MB/s
Jan 23 18:49:11.482945 kernel: raid6: avx2x1 gen() 39779 MB/s
Jan 23 18:49:11.483042 kernel: raid6: using algorithm avx2x2 gen() 49749 MB/s
Jan 23 18:49:11.501982 kernel: raid6: .... xor() 36841 MB/s, rmw enabled
Jan 23 18:49:11.502048 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 18:49:11.518235 kernel: xor: automatically using best checksumming function avx
Jan 23 18:49:11.682218 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:49:11.691774 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:49:11.696330 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:49:11.717650 systemd-udevd[448]: Using default interface naming scheme 'v255'.
Jan 23 18:49:11.722758 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:49:11.727936 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:49:11.759858 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Jan 23 18:49:11.797975 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:49:11.799573 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:49:11.914135 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:49:11.920385 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:49:12.027204 kernel: cryptd: max_cpu_qlen set to 1000 Jan 23 18:49:12.050223 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 23 18:49:12.060202 kernel: scsi host0: Virtio SCSI HBA Jan 23 18:49:12.060764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:12.060851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:12.064568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:12.065880 kernel: AES CTR mode by8 optimization enabled Jan 23 18:49:12.070223 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 23 18:49:12.072939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:12.087236 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:12.098841 kernel: libata version 3.00 loaded. Jan 23 18:49:12.103861 kernel: ACPI: bus type USB registered Jan 23 18:49:12.103880 kernel: usbcore: registered new interface driver usbfs Jan 23 18:49:12.103889 kernel: usbcore: registered new interface driver hub Jan 23 18:49:12.107371 kernel: usbcore: registered new device driver usb Jan 23 18:49:12.113702 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:12.114159 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:12.115981 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:12.137036 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:12.140033 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 23 18:49:12.145489 kernel: sd 0:0:0:0: Power-on or device reset occurred Jan 23 18:49:12.145678 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB) Jan 23 18:49:12.149251 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 23 18:49:12.149410 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Jan 23 18:49:12.149535 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 23 18:49:12.154298 kernel: ahci 0000:00:1f.2: version 3.0 Jan 23 18:49:12.154461 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jan 23 18:49:12.160145 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jan 23 18:49:12.160303 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jan 23 18:49:12.160419 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jan 23 18:49:12.169993 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 18:49:12.170046 kernel: GPT:17805311 != 160006143 Jan 23 18:49:12.170056 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 18:49:12.170064 kernel: GPT:17805311 != 160006143 Jan 23 18:49:12.170079 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 23 18:49:12.170087 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:12.170562 kernel: scsi host1: ahci Jan 23 18:49:12.170735 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 23 18:49:12.174518 kernel: scsi host2: ahci Jan 23 18:49:12.176196 kernel: scsi host3: ahci Jan 23 18:49:12.179316 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 18:49:12.179483 kernel: scsi host4: ahci Jan 23 18:49:12.179507 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 23 18:49:12.182304 kernel: scsi host5: ahci Jan 23 18:49:12.182339 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 23 18:49:12.185538 kernel: scsi host6: ahci Jan 23 18:49:12.186201 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 23 18:49:12.194016 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 lpm-pol 1 Jan 23 18:49:12.194045 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 23 18:49:12.194210 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 lpm-pol 1 Jan 23 18:49:12.194220 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 23 18:49:12.194343 kernel: hub 1-0:1.0: USB hub found Jan 23 18:49:12.194477 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 lpm-pol 1 Jan 23 18:49:12.194485 kernel: hub 1-0:1.0: 4 ports detected Jan 23 18:49:12.194601 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 lpm-pol 1 Jan 23 18:49:12.194609 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 23 18:49:12.194738 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 lpm-pol 1 Jan 23 18:49:12.194747 kernel: hub 2-0:1.0: USB hub found Jan 23 18:49:12.194872 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 lpm-pol 1 Jan 23 18:49:12.194880 kernel: hub 2-0:1.0: 4 ports detected Jan 23 18:49:12.245773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 23 18:49:12.252521 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 23 18:49:12.258195 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 23 18:49:12.258878 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Jan 23 18:49:12.265660 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 18:49:12.267726 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 23 18:49:12.283353 disk-uuid[653]: Primary Header is updated. Jan 23 18:49:12.283353 disk-uuid[653]: Secondary Entries is updated. Jan 23 18:49:12.283353 disk-uuid[653]: Secondary Header is updated. 
Jan 23 18:49:12.297206 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:12.430213 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 23 18:49:12.516217 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:12.516306 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:12.516330 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:12.523240 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:12.530203 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jan 23 18:49:12.530231 kernel: ata1.00: LPM support broken, forcing max_power Jan 23 18:49:12.532404 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jan 23 18:49:12.538431 kernel: ata1.00: applying bridge limits Jan 23 18:49:12.542886 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jan 23 18:49:12.543418 kernel: ata1.00: LPM support broken, forcing max_power Jan 23 18:49:12.547480 kernel: ata1.00: configured for UDMA/100 Jan 23 18:49:12.556255 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 23 18:49:12.621835 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jan 23 18:49:12.622368 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 23 18:49:12.627240 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 18:49:12.638655 kernel: usbcore: registered new interface driver usbhid Jan 23 18:49:12.638735 kernel: usbhid: USB HID core driver Jan 23 18:49:12.654160 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input4 Jan 23 18:49:12.654249 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 23 18:49:12.654603 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0 Jan 23 18:49:12.975105 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 18:49:12.976984 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:49:12.977986 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:12.979583 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:49:12.982820 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 18:49:13.015027 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:49:13.321073 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 23 18:49:13.324361 disk-uuid[654]: The operation has completed successfully. Jan 23 18:49:13.409667 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 18:49:13.409839 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 18:49:13.459876 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 18:49:13.481407 sh[687]: Success Jan 23 18:49:13.510427 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 18:49:13.510524 kernel: device-mapper: uevent: version 1.0.3 Jan 23 18:49:13.516933 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 23 18:49:13.533318 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jan 23 18:49:13.604015 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 18:49:13.610360 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 23 18:49:13.627485 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 18:49:13.648248 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (699) Jan 23 18:49:13.648319 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841 Jan 23 18:49:13.653635 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:13.675565 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 18:49:13.675641 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 18:49:13.675664 kernel: BTRFS info (device dm-0): enabling free space tree Jan 23 18:49:13.681998 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 18:49:13.683758 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:49:13.685227 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 18:49:13.687398 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 18:49:13.691389 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 18:49:13.742261 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (732) Jan 23 18:49:13.753130 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:13.753214 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:13.769644 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:13.769697 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:13.769719 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:13.782231 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:13.784621 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 18:49:13.789407 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 18:49:13.930456 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:49:13.940581 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:49:13.945645 ignition[799]: Ignition 2.22.0 Jan 23 18:49:13.946173 ignition[799]: Stage: fetch-offline Jan 23 18:49:13.946548 ignition[799]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:13.946557 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:13.946620 ignition[799]: parsed url from cmdline: "" Jan 23 18:49:13.946624 ignition[799]: no config URL provided Jan 23 18:49:13.946628 ignition[799]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:49:13.946634 ignition[799]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:49:13.946639 ignition[799]: failed to fetch config: resource requires networking Jan 23 18:49:13.947256 ignition[799]: Ignition finished successfully Jan 23 18:49:13.959859 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:49:13.983985 systemd-networkd[872]: lo: Link UP Jan 23 18:49:13.984002 systemd-networkd[872]: lo: Gained carrier Jan 23 18:49:13.988545 systemd-networkd[872]: Enumeration completed Jan 23 18:49:13.988664 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 23 18:49:13.989471 systemd[1]: Reached target network.target - Network. Jan 23 18:49:13.990882 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:13.990890 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:13.991572 systemd-networkd[872]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:13.991578 systemd-networkd[872]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:13.992880 systemd-networkd[872]: eth0: Link UP Jan 23 18:49:13.993151 systemd-networkd[872]: eth1: Link UP Jan 23 18:49:13.993475 systemd-networkd[872]: eth0: Gained carrier Jan 23 18:49:13.993489 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:13.994968 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 18:49:14.000889 systemd-networkd[872]: eth1: Gained carrier Jan 23 18:49:14.000904 systemd-networkd[872]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:14.042262 ignition[876]: Ignition 2.22.0 Jan 23 18:49:14.042280 ignition[876]: Stage: fetch Jan 23 18:49:14.042439 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:14.042454 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:14.042551 ignition[876]: parsed url from cmdline: "" Jan 23 18:49:14.042558 ignition[876]: no config URL provided Jan 23 18:49:14.042566 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 18:49:14.046257 systemd-networkd[872]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 18:49:14.042579 ignition[876]: no config at "/usr/lib/ignition/user.ign" Jan 23 18:49:14.042612 ignition[876]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 23 18:49:14.042814 ignition[876]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable Jan 23 18:49:14.059260 systemd-networkd[872]: eth0: DHCPv4 address 89.167.0.15/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 18:49:14.243124 ignition[876]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2 Jan 23 18:49:14.247878 ignition[876]: GET result: OK Jan 23 18:49:14.247982 ignition[876]: parsing config with SHA512: 769dba63fc7b57cd53624264976914aecb9c006c1dc3d3b3cf0a9f3116d8b6d312bcd557f512947f05f15a417781371257870ffb1303cf5bea4d311cc914553a Jan 23 18:49:14.253452 unknown[876]: fetched base config from "system" Jan 23 18:49:14.253475 unknown[876]: fetched base config from "system" Jan 23 18:49:14.254382 ignition[876]: fetch: fetch complete Jan 23 18:49:14.253486 unknown[876]: fetched user config from "hetzner" Jan 23 18:49:14.254399 ignition[876]: fetch: fetch passed Jan 23 18:49:14.254474 ignition[876]: Ignition finished successfully Jan 23 18:49:14.259734 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 18:49:14.263579 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 18:49:14.315946 ignition[883]: Ignition 2.22.0 Jan 23 18:49:14.315970 ignition[883]: Stage: kargs Jan 23 18:49:14.316239 ignition[883]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:14.316268 ignition[883]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:14.317334 ignition[883]: kargs: kargs passed Jan 23 18:49:14.317410 ignition[883]: Ignition finished successfully Jan 23 18:49:14.320849 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 18:49:14.324837 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 18:49:14.383415 ignition[890]: Ignition 2.22.0 Jan 23 18:49:14.384589 ignition[890]: Stage: disks Jan 23 18:49:14.384800 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:14.384819 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:14.385937 ignition[890]: disks: disks passed Jan 23 18:49:14.386011 ignition[890]: Ignition finished successfully Jan 23 18:49:14.393525 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 18:49:14.394770 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 18:49:14.395843 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 18:49:14.397279 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:49:14.398748 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:49:14.400268 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:49:14.403527 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 18:49:14.443661 systemd-fsck[899]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks Jan 23 18:49:14.448802 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 18:49:14.453372 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 18:49:14.586338 kernel: EXT4-fs (sda9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none. Jan 23 18:49:14.586606 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 18:49:14.587398 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 18:49:14.589203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:49:14.591230 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 18:49:14.594287 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 23 18:49:14.594958 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 18:49:14.595630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:49:14.608754 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 23 18:49:14.612979 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 23 18:49:14.618364 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (907) Jan 23 18:49:14.621462 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:14.624594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:14.652150 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:14.652249 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:14.652275 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:14.653387 coreos-metadata[909]: Jan 23 18:49:14.653 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 23 18:49:14.658850 coreos-metadata[909]: Jan 23 18:49:14.654 INFO Fetch successful Jan 23 18:49:14.658850 coreos-metadata[909]: Jan 23 18:49:14.656 INFO wrote hostname ci-4459-2-3-1-de7581f71a to /sysroot/etc/hostname Jan 23 18:49:14.657848 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:49:14.661461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 18:49:14.687011 initrd-setup-root[935]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 18:49:14.693549 initrd-setup-root[942]: cut: /sysroot/etc/group: No such file or directory Jan 23 18:49:14.701247 initrd-setup-root[949]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 18:49:14.706680 initrd-setup-root[956]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 18:49:14.857145 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 18:49:14.861035 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 18:49:14.865376 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 18:49:14.887007 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 18:49:14.895282 kernel: BTRFS info (device sda6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:14.913321 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 18:49:14.945682 ignition[1025]: INFO : Ignition 2.22.0 Jan 23 18:49:14.945682 ignition[1025]: INFO : Stage: mount Jan 23 18:49:14.950015 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:14.950015 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:14.950015 ignition[1025]: INFO : mount: mount passed Jan 23 18:49:14.950015 ignition[1025]: INFO : Ignition finished successfully Jan 23 18:49:14.952093 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 18:49:14.955554 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 18:49:14.983841 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 18:49:15.028240 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1036) Jan 23 18:49:15.034780 kernel: BTRFS info (device sda6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe Jan 23 18:49:15.034844 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 23 18:49:15.051340 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 23 18:49:15.051416 kernel: BTRFS info (device sda6): turning on async discard Jan 23 18:49:15.051439 kernel: BTRFS info (device sda6): enabling free space tree Jan 23 18:49:15.060001 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 18:49:15.112705 ignition[1052]: INFO : Ignition 2.22.0 Jan 23 18:49:15.112705 ignition[1052]: INFO : Stage: files Jan 23 18:49:15.115419 ignition[1052]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:15.115419 ignition[1052]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:15.115419 ignition[1052]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:49:15.118390 ignition[1052]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:49:15.118390 ignition[1052]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:49:15.120921 ignition[1052]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:49:15.122346 ignition[1052]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:49:15.123340 ignition[1052]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:49:15.122578 unknown[1052]: wrote ssh authorized keys file for user: core Jan 23 18:49:15.126627 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:15.128631 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:49:15.337993 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:49:15.522412 systemd-networkd[872]: eth0: Gained IPv6LL Jan 23 18:49:15.657273 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:49:15.657273 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:15.660270 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:49:15.666042 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:15.666042 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:49:15.666042 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:15.666042 
ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:15.666042 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:15.666042 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 23 18:49:15.906558 systemd-networkd[872]: eth1: Gained IPv6LL Jan 23 18:49:16.076385 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:49:16.511890 ignition[1052]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 23 18:49:16.511890 ignition[1052]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:49:16.515777 ignition[1052]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:16.520177 ignition[1052]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:49:16.520177 ignition[1052]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:49:16.520177 ignition[1052]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:49:16.525299 ignition[1052]: INFO : files: files passed Jan 23 18:49:16.525299 ignition[1052]: INFO : Ignition finished successfully Jan 23 18:49:16.529874 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:49:16.533119 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:49:16.539464 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:49:16.557105 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:49:16.558406 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jan 23 18:49:16.571071 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:16.572612 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:16.575281 initrd-setup-root-after-ignition[1086]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:49:16.579110 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:16.581942 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:49:16.585564 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:49:16.663934 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:49:16.664221 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 18:49:16.666659 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:49:16.668165 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:49:16.670177 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:49:16.671703 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:49:16.726743 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:16.730693 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:49:16.761656 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:16.763910 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:16.765063 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:49:16.767010 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:49:16.767257 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:49:16.769790 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:49:16.771688 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:49:16.773446 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:49:16.775118 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:49:16.776951 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:49:16.778673 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:49:16.780604 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:49:16.782397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:49:16.784229 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:49:16.786027 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:49:16.787775 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:49:16.789639 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:49:16.789897 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:49:16.792349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:16.794080 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:49:16.795962 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jan 23 18:49:16.796143 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:49:16.797697 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:49:16.797939 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:49:16.800376 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:49:16.800567 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:49:16.802310 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:49:16.802544 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:49:16.804021 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:49:16.804322 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 23 18:49:16.808338 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:49:16.812205 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:49:16.815942 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:49:16.818389 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:16.819577 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:49:16.819802 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:49:16.831430 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:49:16.833322 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:49:16.862225 ignition[1107]: INFO : Ignition 2.22.0 Jan 23 18:49:16.862225 ignition[1107]: INFO : Stage: umount Jan 23 18:49:16.866304 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:49:16.866304 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 23 18:49:16.866304 ignition[1107]: INFO : umount: umount passed Jan 23 18:49:16.866304 ignition[1107]: INFO : Ignition finished successfully Jan 23 18:49:16.867425 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:49:16.870401 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:49:16.870578 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:49:16.872927 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:49:16.873052 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:49:16.874829 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:49:16.874907 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:49:16.876103 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:49:16.876173 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:49:16.877440 systemd[1]: Stopped target network.target - Network. Jan 23 18:49:16.878655 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:49:16.878733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:49:16.879967 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:49:16.881223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:49:16.885297 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:16.886001 systemd[1]: Stopped target slices.target - Slice Units. 
Jan 23 18:49:16.887320 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:49:16.888613 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:49:16.888683 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:49:16.889873 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:49:16.889933 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:49:16.891058 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:49:16.891151 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:49:16.892347 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:49:16.892413 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 18:49:16.893799 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:49:16.894977 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:49:16.898015 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:49:16.898231 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:49:16.900630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:49:16.900767 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:49:16.903839 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:49:16.904034 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:49:16.910710 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:49:16.911072 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:49:16.911299 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:49:16.913836 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:49:16.915176 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:49:16.915927 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:49:16.916001 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:16.918629 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:49:16.920325 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:49:16.920465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:49:16.921396 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:49:16.921479 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:49:16.926453 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:49:16.926540 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:49:16.928003 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:49:16.928070 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:49:16.929614 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:49:16.935896 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:49:16.936003 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:16.949690 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Jan 23 18:49:16.949952 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:49:16.953099 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:49:16.953254 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:49:16.954031 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:49:16.954098 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:49:16.955494 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:49:16.955570 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 18:49:16.959039 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:49:16.959129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:49:16.960896 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:49:16.960977 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:49:16.963751 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:49:16.967281 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:49:16.967365 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:49:16.968963 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:49:16.969036 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:49:16.973100 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 18:49:16.973234 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:49:16.977324 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:49:16.977400 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:16.979392 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:16.979465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:16.985548 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:49:16.985636 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jan 23 18:49:16.985702 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 18:49:16.985770 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:49:16.987276 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:49:16.987441 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:49:16.991375 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:49:16.991535 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:49:16.993439 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:49:16.995347 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:49:17.030005 systemd[1]: Switching root. Jan 23 18:49:17.081618 systemd-journald[199]: Journal stopped Jan 23 18:49:18.488810 systemd-journald[199]: Received SIGTERM from PID 1 (systemd). 
Jan 23 18:49:18.488875 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:49:18.488886 kernel: SELinux: policy capability open_perms=1 Jan 23 18:49:18.488900 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:49:18.488908 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:49:18.488916 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:49:18.488927 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:49:18.488935 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:49:18.488949 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:49:18.488957 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:49:18.488965 kernel: audit: type=1403 audit(1769194157.351:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:49:18.488979 systemd[1]: Successfully loaded SELinux policy in 91.631ms. Jan 23 18:49:18.489000 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.293ms. Jan 23 18:49:18.489010 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:49:18.489021 systemd[1]: Detected virtualization kvm. Jan 23 18:49:18.489030 systemd[1]: Detected architecture x86-64. Jan 23 18:49:18.489038 systemd[1]: Detected first boot. Jan 23 18:49:18.489046 systemd[1]: Hostname set to . Jan 23 18:49:18.489055 systemd[1]: Initializing machine ID from VM UUID. Jan 23 18:49:18.489064 zram_generator::config[1151]: No configuration found. Jan 23 18:49:18.489075 kernel: Guest personality initialized and is inactive Jan 23 18:49:18.489083 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Jan 23 18:49:18.489091 kernel: Initialized host personality Jan 23 18:49:18.489108 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:49:18.489121 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:49:18.489131 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:49:18.489141 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:49:18.489150 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:49:18.489159 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:49:18.489171 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:49:18.489604 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:49:18.489618 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:49:18.489628 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:49:18.489637 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:49:18.489646 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:49:18.489656 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:49:18.489664 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:49:18.489673 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 23 18:49:18.489683 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:49:18.489695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:49:18.489704 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:49:18.489714 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:49:18.489723 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:49:18.489732 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:49:18.489741 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:49:18.489752 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:49:18.489764 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 18:49:18.489773 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:49:18.489782 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:49:18.489791 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:49:18.489799 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:49:18.489808 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:49:18.489817 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:49:18.489826 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:49:18.489837 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:49:18.489846 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:49:18.489854 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:49:18.489863 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:49:18.489872 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:49:18.489905 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:49:18.489914 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:49:18.489923 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:49:18.489932 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:49:18.489943 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:49:18.489952 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:18.489960 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:49:18.489969 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:49:18.489978 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:49:18.489987 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:49:18.489996 systemd[1]: Reached target machines.target - Containers. Jan 23 18:49:18.490004 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jan 23 18:49:18.490013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:18.490024 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:49:18.490033 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:49:18.490042 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:49:18.490051 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:49:18.490059 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:49:18.490068 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:49:18.490077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:49:18.490086 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 18:49:18.490098 kernel: ACPI: bus type drm_connector registered Jan 23 18:49:18.490115 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:49:18.490124 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:49:18.490133 kernel: loop: module loaded Jan 23 18:49:18.490141 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:49:18.490150 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:49:18.490163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:18.490176 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:49:18.490203 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:49:18.490214 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:49:18.490224 kernel: fuse: init (API version 7.41) Jan 23 18:49:18.490232 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:49:18.490241 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:49:18.490251 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:49:18.490262 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:49:18.490271 systemd[1]: Stopped verity-setup.service. Jan 23 18:49:18.490280 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:18.490288 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:49:18.490317 systemd-journald[1235]: Collecting audit messages is disabled. Jan 23 18:49:18.490336 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:49:18.490345 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:49:18.490355 systemd-journald[1235]: Journal started Jan 23 18:49:18.490371 systemd-journald[1235]: Runtime Journal (/run/log/journal/482b9403240e4b7da1d5bedff195648d) is 8M, max 76.1M, 68.1M free. Jan 23 18:49:18.063383 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:49:18.071477 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. 
Jan 23 18:49:18.072595 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:49:18.493211 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:49:18.493813 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:49:18.494376 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:49:18.494861 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:49:18.495555 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:49:18.496246 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:49:18.496892 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:49:18.497057 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:49:18.497791 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:49:18.497940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:49:18.498630 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:49:18.498822 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:49:18.499708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:49:18.499896 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:49:18.500535 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:49:18.500681 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:49:18.501376 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:49:18.501579 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:49:18.502293 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:49:18.502909 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:49:18.503679 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:49:18.504482 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:49:18.516148 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:49:18.521251 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:49:18.528754 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:49:18.529127 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:49:18.529148 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:49:18.530163 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:49:18.532358 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:49:18.532834 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:18.536770 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:49:18.540008 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:49:18.540656 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 23 18:49:18.542353 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:49:18.542718 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:49:18.548089 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:49:18.551826 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:49:18.562275 systemd-journald[1235]: Time spent on flushing to /var/log/journal/482b9403240e4b7da1d5bedff195648d is 77.762ms for 1244 entries. Jan 23 18:49:18.562275 systemd-journald[1235]: System Journal (/var/log/journal/482b9403240e4b7da1d5bedff195648d) is 8M, max 584.8M, 576.8M free. Jan 23 18:49:18.665262 systemd-journald[1235]: Received client request to flush runtime journal. Jan 23 18:49:18.665310 kernel: loop0: detected capacity change from 0 to 8 Jan 23 18:49:18.665333 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:49:18.665346 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 18:49:18.556321 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 18:49:18.560156 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:49:18.561345 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 18:49:18.561987 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:49:18.573329 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:49:18.576331 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:49:18.644696 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:49:18.664471 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 23 18:49:18.664481 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Jan 23 18:49:18.669534 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:49:18.673979 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:49:18.678268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 18:49:18.680655 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:49:18.682689 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:49:18.699658 kernel: loop2: detected capacity change from 0 to 110984 Jan 23 18:49:18.717849 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:49:18.721298 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:49:18.734539 kernel: loop3: detected capacity change from 0 to 229808 Jan 23 18:49:18.741412 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 23 18:49:18.741649 systemd-tmpfiles[1301]: ACLs are not supported, ignoring. Jan 23 18:49:18.748381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 18:49:18.782209 kernel: loop4: detected capacity change from 0 to 8 Jan 23 18:49:18.785663 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 18:49:18.802227 kernel: loop6: detected capacity change from 0 to 110984 Jan 23 18:49:18.820263 kernel: loop7: detected capacity change from 0 to 229808 Jan 23 18:49:18.842042 (sd-merge)[1306]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 23 18:49:18.843337 (sd-merge)[1306]: Merged extensions into '/usr'. Jan 23 18:49:18.850307 systemd[1]: Reload requested from client PID 1276 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:49:18.850394 systemd[1]: Reloading... Jan 23 18:49:18.930219 zram_generator::config[1331]: No configuration found. Jan 23 18:49:19.025652 ldconfig[1271]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:49:19.081538 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:49:19.081907 systemd[1]: Reloading finished in 229 ms. Jan 23 18:49:19.112619 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:49:19.113450 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:49:19.115241 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:49:19.128845 systemd[1]: Starting ensure-sysext.service... Jan 23 18:49:19.133316 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:49:19.139440 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:49:19.161290 systemd[1]: Reload requested from client PID 1377 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:49:19.161316 systemd[1]: Reloading... Jan 23 18:49:19.173495 systemd-udevd[1379]: Using default interface naming scheme 'v255'. Jan 23 18:49:19.175689 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:49:19.176171 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:49:19.176499 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:49:19.176765 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 18:49:19.177618 systemd-tmpfiles[1378]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 18:49:19.177885 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 23 18:49:19.177973 systemd-tmpfiles[1378]: ACLs are not supported, ignoring. Jan 23 18:49:19.183432 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:49:19.183568 systemd-tmpfiles[1378]: Skipping /boot Jan 23 18:49:19.201291 systemd-tmpfiles[1378]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:49:19.201302 systemd-tmpfiles[1378]: Skipping /boot Jan 23 18:49:19.282213 zram_generator::config[1435]: No configuration found. 
Jan 23 18:49:19.480523 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:49:19.480623 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input5 Jan 23 18:49:19.499223 kernel: ACPI: button: Power Button [PWRF] Jan 23 18:49:19.516322 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:49:19.516725 systemd[1]: Reloading finished in 354 ms. Jan 23 18:49:19.524699 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:49:19.526222 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:49:19.564037 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 18:49:19.568615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.572820 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 18:49:19.573046 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 18:49:19.573234 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 18:49:19.575352 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:49:19.577431 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:49:19.579329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:19.580308 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:49:19.581397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:49:19.583373 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:49:19.584332 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:19.584410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:19.587624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:49:19.593583 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:49:19.597424 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 18:49:19.602379 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:49:19.602706 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.606045 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.606333 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:19.606603 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 18:49:19.606703 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:19.606798 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.615173 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.615352 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:49:19.619392 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:49:19.620355 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:49:19.620383 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:49:19.628410 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:49:19.629053 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:49:19.630031 systemd[1]: Finished ensure-sysext.service. Jan 23 18:49:19.630678 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:49:19.631245 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:49:19.635453 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:49:19.640346 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 18:49:19.641527 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:49:19.642354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:49:19.652079 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 18:49:19.654021 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:49:19.656836 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:49:19.657002 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:49:19.658516 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:49:19.664599 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:49:19.672497 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:49:19.675535 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:49:19.676154 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:49:19.676387 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:49:19.696810 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:49:19.705472 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 23 18:49:19.708349 augenrules[1553]: No rules Jan 23 18:49:19.709457 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:49:19.709695 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:49:19.711831 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:49:19.712926 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:49:19.746206 kernel: EDAC MC: Ver: 3.0.0 Jan 23 18:49:19.752373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:19.760400 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:19.760908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:19.763346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:19.778869 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 18:49:19.784508 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 18:49:19.784546 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:49:19.787831 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 18:49:19.788016 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 18:49:19.788031 kernel: [drm] features: -context_init Jan 23 18:49:19.813237 kernel: [drm] number of scanouts: 1 Jan 23 18:49:19.813279 kernel: [drm] number of cap sets: 0 Jan 23 18:49:19.819244 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 18:49:19.821266 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 18:49:19.827622 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 18:49:19.837210 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 18:49:19.840671 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:49:19.841819 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:19.847284 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:49:19.918728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:49:19.942029 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 18:49:19.943489 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:49:19.944925 systemd-networkd[1515]: lo: Link UP Jan 23 18:49:19.944936 systemd-networkd[1515]: lo: Gained carrier Jan 23 18:49:19.949115 systemd-networkd[1515]: Enumeration completed Jan 23 18:49:19.949485 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:49:19.950371 systemd-networkd[1515]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:19.950430 systemd-networkd[1515]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:19.952118 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:49:19.952541 systemd-networkd[1515]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 18:49:19.952546 systemd-networkd[1515]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:49:19.953483 systemd-networkd[1515]: eth0: Link UP Jan 23 18:49:19.953646 systemd-networkd[1515]: eth0: Gained carrier Jan 23 18:49:19.953657 systemd-networkd[1515]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:19.955287 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:49:19.958574 systemd-resolved[1516]: Positive Trust Anchors: Jan 23 18:49:19.958793 systemd-resolved[1516]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:49:19.958842 systemd-resolved[1516]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:49:19.961380 systemd-networkd[1515]: eth1: Link UP Jan 23 18:49:19.962610 systemd-resolved[1516]: Using system hostname 'ci-4459-2-3-1-de7581f71a'. Jan 23 18:49:19.962791 systemd-networkd[1515]: eth1: Gained carrier Jan 23 18:49:19.962807 systemd-networkd[1515]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:49:19.964388 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:49:19.964495 systemd[1]: Reached target network.target - Network. Jan 23 18:49:19.964547 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:49:19.964599 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:49:19.964724 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:49:19.964802 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:49:19.964872 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:49:19.965054 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:49:19.965201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:49:19.965261 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:49:19.965313 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:49:19.965332 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:49:19.965372 systemd[1]: Reached target timers.target - Timer Units. Jan 23 18:49:19.966893 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:49:19.968928 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:49:19.971652 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:49:19.974749 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). 
Jan 23 18:49:19.975454 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:49:19.981729 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:49:19.984780 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:49:19.986157 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:49:19.988336 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:49:19.993413 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:49:19.995172 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:49:19.995681 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:49:19.995706 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:49:19.996256 systemd-networkd[1515]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 18:49:19.996717 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:49:19.997163 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection. Jan 23 18:49:19.998711 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:49:20.002312 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:49:20.005286 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:49:20.008505 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:49:20.011237 systemd-networkd[1515]: eth0: DHCPv4 address 89.167.0.15/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 18:49:20.012071 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection. Jan 23 18:49:20.014301 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:49:20.018049 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:49:20.020329 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:49:20.024537 jq[1592]: false Jan 23 18:49:20.025356 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:49:20.031044 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:49:20.034050 coreos-metadata[1589]: Jan 23 18:49:20.033 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 18:49:20.036822 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 18:49:20.042310 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:49:20.045618 coreos-metadata[1589]: Jan 23 18:49:20.045 INFO Fetch successful Jan 23 18:49:20.045618 coreos-metadata[1589]: Jan 23 18:49:20.045 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 18:49:20.045880 coreos-metadata[1589]: Jan 23 18:49:20.045 INFO Fetch successful Jan 23 18:49:20.049993 extend-filesystems[1593]: Found /dev/sda6 Jan 23 18:49:20.052060 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing passwd entry cache Jan 23 18:49:20.050016 oslogin_cache_refresh[1596]: Refreshing passwd entry cache Jan 23 18:49:20.053362 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 23 18:49:20.058428 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting users, quitting Jan 23 18:49:20.058428 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:49:20.058420 oslogin_cache_refresh[1596]: Failure getting users, quitting Jan 23 18:49:20.058435 oslogin_cache_refresh[1596]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:49:20.061251 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Refreshing group entry cache Jan 23 18:49:20.059643 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 18:49:20.059231 oslogin_cache_refresh[1596]: Refreshing group entry cache Jan 23 18:49:20.064489 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:49:20.064859 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:49:20.065755 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Failure getting groups, quitting Jan 23 18:49:20.065755 google_oslogin_nss_cache[1596]: oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:49:20.065750 oslogin_cache_refresh[1596]: Failure getting groups, quitting Jan 23 18:49:20.065760 oslogin_cache_refresh[1596]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:49:20.066712 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:49:20.071611 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:49:20.077483 extend-filesystems[1593]: Found /dev/sda9 Jan 23 18:49:20.090193 extend-filesystems[1593]: Checking size of /dev/sda9 Jan 23 18:49:20.090722 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:49:20.097885 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:49:20.098070 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:49:20.100484 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:49:20.100668 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:49:20.102725 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 18:49:20.102901 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:49:20.105302 extend-filesystems[1593]: Resized partition /dev/sda9 Jan 23 18:49:20.110195 extend-filesystems[1626]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 18:49:20.113790 jq[1613]: true Jan 23 18:49:20.114722 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:49:20.114915 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 23 18:49:20.116215 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 23 18:49:20.123196 update_engine[1610]: I20260123 18:49:20.123119 1610 main.cc:92] Flatcar Update Engine starting Jan 23 18:49:20.148517 (ntainerd)[1630]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:49:20.165681 jq[1629]: true Jan 23 18:49:20.174144 tar[1627]: linux-amd64/LICENSE Jan 23 18:49:20.174394 tar[1627]: linux-amd64/helm Jan 23 18:49:20.199287 dbus-daemon[1590]: [system] SELinux support is enabled Jan 23 18:49:20.199659 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:49:20.203617 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:49:20.204229 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:49:20.206488 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:49:20.206502 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:49:20.207382 update_engine[1610]: I20260123 18:49:20.207091 1610 update_check_scheduler.cc:74] Next update check in 5m32s Jan 23 18:49:20.212678 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:49:20.217435 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:49:20.232973 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:49:20.239349 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:49:20.259985 systemd-logind[1606]: New seat seat0. Jan 23 18:49:20.262650 systemd-logind[1606]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 18:49:20.262677 systemd-logind[1606]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:49:20.262807 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:49:20.345002 bash[1673]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:49:20.345717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:49:20.352697 systemd[1]: Starting sshkeys.service... Jan 23 18:49:20.389672 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 18:49:20.396806 containerd[1630]: time="2026-01-23T18:49:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:49:20.393624 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 18:49:20.400312 containerd[1630]: time="2026-01-23T18:49:20.399874317Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417687302Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.19µs"
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417712722Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417728912Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417858932Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417868722Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417887762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417939812Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.417947082Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.418167693Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.418190743Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.420413495Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420514 containerd[1630]: time="2026-01-23T18:49:20.420421365Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jan 23 18:49:20.420852 containerd[1630]: time="2026-01-23T18:49:20.420754435Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 23 18:49:20.424441 kernel: EXT4-fs (sda9): resized filesystem to 19393531
Jan 23 18:49:20.439499 containerd[1630]: time="2026-01-23T18:49:20.425111708Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:49:20.439499 containerd[1630]: time="2026-01-23T18:49:20.425154148Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jan 23 18:49:20.439499 containerd[1630]: time="2026-01-23T18:49:20.425161838Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jan 23 18:49:20.439499 containerd[1630]: time="2026-01-23T18:49:20.425225089Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jan 23 18:49:20.439499 containerd[1630]: time="2026-01-23T18:49:20.425396999Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jan 23 18:49:20.437878 locksmithd[1652]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 23 18:49:20.440518 coreos-metadata[1677]: Jan 23 18:49:20.434 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1
Jan 23 18:49:20.440518 coreos-metadata[1677]: Jan 23 18:49:20.435 INFO Fetch successful
Jan 23 18:49:20.440966 containerd[1630]: time="2026-01-23T18:49:20.440176111Z" level=info msg="metadata content store policy set" policy=shared
Jan 23 18:49:20.441102 unknown[1677]: wrote ssh authorized keys file for user: core
Jan 23 18:49:20.441820 sshd_keygen[1611]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 23 18:49:20.443715 extend-filesystems[1626]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jan 23 18:49:20.443715 extend-filesystems[1626]: old_desc_blocks = 1, new_desc_blocks = 10
Jan 23 18:49:20.443715 extend-filesystems[1626]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long.
Jan 23 18:49:20.472781 extend-filesystems[1593]: Resized filesystem in /dev/sda9
Jan 23 18:49:20.450499 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449242529Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449276529Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449286509Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449294709Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449304729Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449311849Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449326439Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449334849Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449342689Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449350209Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449356649Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449365569Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449455219Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jan 23 18:49:20.475229 containerd[1630]: time="2026-01-23T18:49:20.449467529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jan 23 18:49:20.450694 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449477289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449487209Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449494979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449510809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449518549Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449526939Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449534399Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449542779Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449550129Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449582749Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449591449Z" level=info msg="Start snapshots syncer"
Jan 23 18:49:20.476571 containerd[1630]: time="2026-01-23T18:49:20.449609479Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jan 23 18:49:20.476736 containerd[1630]: time="2026-01-23T18:49:20.449777139Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jan 23 18:49:20.476736 containerd[1630]: time="2026-01-23T18:49:20.449809359Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455365084Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455485164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455528404Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455537114Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455544344Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455553934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455562004Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455570574Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455607374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455615564Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.455624784Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.456240874Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.456258574Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jan 23 18:49:20.476826 containerd[1630]: time="2026-01-23T18:49:20.456265514Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456272494Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456278034Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456331924Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456344964Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456357734Z" level=info msg="runtime interface created"
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456361964Z" level=info msg="created NRI interface"
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456381314Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456389634Z" level=info msg="Connect containerd service"
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.456403385Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 23 18:49:20.477515 containerd[1630]: time="2026-01-23T18:49:20.461822739Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 23 18:49:20.484268 update-ssh-keys[1688]: Updated "/home/core/.ssh/authorized_keys"
Jan 23 18:49:20.486208 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Jan 23 18:49:20.489023 systemd[1]: Finished sshkeys.service.
Jan 23 18:49:20.518039 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 23 18:49:20.526330 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549279392Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549330432Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549348192Z" level=info msg="Start subscribing containerd event"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549370832Z" level=info msg="Start recovering state"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549434852Z" level=info msg="Start event monitor"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549443182Z" level=info msg="Start cni network conf syncer for default"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549449392Z" level=info msg="Start streaming server"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549460122Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549466062Z" level=info msg="runtime interface starting up..."
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549470392Z" level=info msg="starting plugins..."
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549481452Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jan 23 18:49:20.549645 containerd[1630]: time="2026-01-23T18:49:20.549568482Z" level=info msg="containerd successfully booted in 0.156854s"
Jan 23 18:49:20.550326 systemd[1]: Started containerd.service - containerd container runtime.
Jan 23 18:49:20.555447 systemd[1]: issuegen.service: Deactivated successfully.
Jan 23 18:49:20.556025 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 23 18:49:20.560232 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 23 18:49:20.574968 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 23 18:49:20.581737 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 23 18:49:20.586368 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 23 18:49:20.591005 systemd[1]: Reached target getty.target - Login Prompts.
Jan 23 18:49:20.656944 tar[1627]: linux-amd64/README.md
Jan 23 18:49:20.672007 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 23 18:49:21.282430 systemd-networkd[1515]: eth0: Gained IPv6LL
Jan 23 18:49:21.283290 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection.
Jan 23 18:49:21.287435 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 18:49:21.288517 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 18:49:21.292726 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:49:21.297363 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 18:49:21.330448 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 18:49:21.346452 systemd-networkd[1515]: eth1: Gained IPv6LL
Jan 23 18:49:21.347049 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection.
Jan 23 18:49:22.633252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:49:22.637076 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 23 18:49:22.642047 systemd[1]: Startup finished in 3.163s (kernel) + 6.578s (initrd) + 5.381s (userspace) = 15.122s.
Jan 23 18:49:22.649959 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:23.474837 kubelet[1738]: E0123 18:49:23.474748 1738 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:23.480421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:23.480776 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:23.481550 systemd[1]: kubelet.service: Consumed 1.655s CPU time, 267.2M memory peak. Jan 23 18:49:25.807651 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:49:25.810431 systemd[1]: Started sshd@0-89.167.0.15:22-20.161.92.111:38420.service - OpenSSH per-connection server daemon (20.161.92.111:38420). Jan 23 18:49:26.614549 sshd[1750]: Accepted publickey for core from 20.161.92.111 port 38420 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:26.617727 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:26.629169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:49:26.631828 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:49:26.646834 systemd-logind[1606]: New session 1 of user core. Jan 23 18:49:26.662801 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:49:26.668281 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:49:26.684780 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:49:26.689918 systemd-logind[1606]: New session c1 of user core. Jan 23 18:49:26.860704 systemd[1755]: Queued start job for default target default.target. Jan 23 18:49:26.871163 systemd[1755]: Created slice app.slice - User Application Slice. Jan 23 18:49:26.871217 systemd[1755]: Reached target paths.target - Paths. Jan 23 18:49:26.871256 systemd[1755]: Reached target timers.target - Timers. Jan 23 18:49:26.872624 systemd[1755]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:49:26.891604 systemd[1755]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:49:26.891716 systemd[1755]: Reached target sockets.target - Sockets. Jan 23 18:49:26.891824 systemd[1755]: Reached target basic.target - Basic System. Jan 23 18:49:26.891917 systemd[1755]: Reached target default.target - Main User Target. Jan 23 18:49:26.891981 systemd[1755]: Startup finished in 190ms. Jan 23 18:49:26.892352 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:49:26.899320 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:49:27.446624 systemd[1]: Started sshd@1-89.167.0.15:22-20.161.92.111:38432.service - OpenSSH per-connection server daemon (20.161.92.111:38432). Jan 23 18:49:28.245254 sshd[1766]: Accepted publickey for core from 20.161.92.111 port 38432 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:28.247033 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:28.256173 systemd-logind[1606]: New session 2 of user core. 
Jan 23 18:49:28.262430 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:49:28.779664 sshd[1769]: Connection closed by 20.161.92.111 port 38432 Jan 23 18:49:28.781508 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:28.787876 systemd-logind[1606]: Session 2 logged out. Waiting for processes to exit. Jan 23 18:49:28.789393 systemd[1]: sshd@1-89.167.0.15:22-20.161.92.111:38432.service: Deactivated successfully. Jan 23 18:49:28.792667 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 18:49:28.796270 systemd-logind[1606]: Removed session 2. Jan 23 18:49:28.915446 systemd[1]: Started sshd@2-89.167.0.15:22-20.161.92.111:38438.service - OpenSSH per-connection server daemon (20.161.92.111:38438). Jan 23 18:49:29.703449 sshd[1775]: Accepted publickey for core from 20.161.92.111 port 38438 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:29.705871 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:29.714273 systemd-logind[1606]: New session 3 of user core. Jan 23 18:49:29.721410 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:49:30.227062 sshd[1778]: Connection closed by 20.161.92.111 port 38438 Jan 23 18:49:30.228356 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:30.231614 systemd-logind[1606]: Session 3 logged out. Waiting for processes to exit. Jan 23 18:49:30.232355 systemd[1]: sshd@2-89.167.0.15:22-20.161.92.111:38438.service: Deactivated successfully. Jan 23 18:49:30.233811 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 18:49:30.235728 systemd-logind[1606]: Removed session 3. Jan 23 18:49:30.368962 systemd[1]: Started sshd@3-89.167.0.15:22-20.161.92.111:38454.service - OpenSSH per-connection server daemon (20.161.92.111:38454). Jan 23 18:49:31.161239 sshd[1784]: Accepted publickey for core from 20.161.92.111 port 38454 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:31.163061 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:31.172250 systemd-logind[1606]: New session 4 of user core. Jan 23 18:49:31.182414 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:49:31.693429 sshd[1787]: Connection closed by 20.161.92.111 port 38454 Jan 23 18:49:31.695476 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:31.700923 systemd[1]: sshd@3-89.167.0.15:22-20.161.92.111:38454.service: Deactivated successfully. Jan 23 18:49:31.704718 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:49:31.706841 systemd-logind[1606]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:49:31.710223 systemd-logind[1606]: Removed session 4. Jan 23 18:49:31.832653 systemd[1]: Started sshd@4-89.167.0.15:22-20.161.92.111:38462.service - OpenSSH per-connection server daemon (20.161.92.111:38462). Jan 23 18:49:32.629532 sshd[1793]: Accepted publickey for core from 20.161.92.111 port 38462 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:32.631943 sshd-session[1793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:32.640260 systemd-logind[1606]: New session 5 of user core. Jan 23 18:49:32.648392 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 18:49:33.053567 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:49:33.054174 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:33.072342 sudo[1797]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:33.194591 sshd[1796]: Connection closed by 20.161.92.111 port 38462 Jan 23 18:49:33.196576 sshd-session[1793]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:33.204112 systemd-logind[1606]: Session 5 logged out. Waiting for processes to exit. Jan 23 18:49:33.205417 systemd[1]: sshd@4-89.167.0.15:22-20.161.92.111:38462.service: Deactivated successfully. Jan 23 18:49:33.209513 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:49:33.213250 systemd-logind[1606]: Removed session 5. Jan 23 18:49:33.335985 systemd[1]: Started sshd@5-89.167.0.15:22-20.161.92.111:57600.service - OpenSSH per-connection server daemon (20.161.92.111:57600). Jan 23 18:49:33.582350 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:49:33.585794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:33.783404 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:33.792443 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:33.830743 kubelet[1814]: E0123 18:49:33.830695 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:33.839433 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:33.839589 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:33.840025 systemd[1]: kubelet.service: Consumed 211ms CPU time, 108.6M memory peak. Jan 23 18:49:34.134241 sshd[1803]: Accepted publickey for core from 20.161.92.111 port 57600 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:34.136012 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:34.145207 systemd-logind[1606]: New session 6 of user core. Jan 23 18:49:34.152417 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:49:34.549488 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:49:34.550085 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:34.558466 sudo[1824]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:34.569072 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:49:34.569723 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:34.586791 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:49:34.649466 augenrules[1846]: No rules Jan 23 18:49:34.650802 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:49:34.651131 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
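The kubelet's first exit at 18:49:23.480421 is followed by "restart counter is at 1" at 18:49:33.582350 here, and counter 2 appears further below at 18:49:44.082373, roughly a 10-second cadence. That spacing is consistent with the RestartSec=10 that kubeadm's packaged kubelet.service ships with, though the unit file itself is not shown in this log. Checking the arithmetic:

```python
from datetime import datetime

# Timestamps copied from the journal: first exit, restart counter 1, counter 2.
events = ["18:49:23.480421", "18:49:33.582350", "18:49:44.082373"]
ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in events]

for a, b in zip(ts, ts[1:]):
    print(f"{(b - a).total_seconds():.1f}s to next restart")  # ~10.1s, ~10.5s
```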
Jan 23 18:49:34.653337 sudo[1823]: pam_unix(sudo:session): session closed for user root Jan 23 18:49:34.776457 sshd[1822]: Connection closed by 20.161.92.111 port 57600 Jan 23 18:49:34.778480 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Jan 23 18:49:34.784052 systemd[1]: sshd@5-89.167.0.15:22-20.161.92.111:57600.service: Deactivated successfully. Jan 23 18:49:34.787704 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:49:34.789813 systemd-logind[1606]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:49:34.793249 systemd-logind[1606]: Removed session 6. Jan 23 18:49:34.911762 systemd[1]: Started sshd@6-89.167.0.15:22-20.161.92.111:57614.service - OpenSSH per-connection server daemon (20.161.92.111:57614). Jan 23 18:49:35.706741 sshd[1855]: Accepted publickey for core from 20.161.92.111 port 57614 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:49:35.709245 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:49:35.718262 systemd-logind[1606]: New session 7 of user core. Jan 23 18:49:35.724415 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:49:36.121672 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:49:36.122303 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:49:36.579316 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 18:49:36.596886 (dockerd)[1877]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 18:49:36.937041 dockerd[1877]: time="2026-01-23T18:49:36.936938964Z" level=info msg="Starting up" Jan 23 18:49:36.938404 dockerd[1877]: time="2026-01-23T18:49:36.938327705Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 18:49:36.958853 dockerd[1877]: time="2026-01-23T18:49:36.958756882Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 18:49:37.036844 dockerd[1877]: time="2026-01-23T18:49:37.036751437Z" level=info msg="Loading containers: start." Jan 23 18:49:37.053233 kernel: Initializing XFRM netlink socket Jan 23 18:49:37.442039 systemd-timesyncd[1528]: Network configuration changed, trying to establish connection. Jan 23 18:49:37.477291 systemd-timesyncd[1528]: Contacted time server 194.59.205.229:123 (2.flatcar.pool.ntp.org). Jan 23 18:49:37.477861 systemd-timesyncd[1528]: Initial clock synchronization to Fri 2026-01-23 18:49:37.507067 UTC. Jan 23 18:49:37.525680 systemd-networkd[1515]: docker0: Link UP Jan 23 18:49:37.531415 dockerd[1877]: time="2026-01-23T18:49:37.531327119Z" level=info msg="Loading containers: done." Jan 23 18:49:37.553597 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3202453325-merged.mount: Deactivated successfully. 
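A small detail in the timesync entries above: the journal line is stamped 18:49:37.477861 but reports synchronization to 18:49:37.507067 UTC. One plausible reading is that the clock was stepped forward by about 29 ms after contacting 194.59.205.229 (2.flatcar.pool.ntp.org):

```python
from datetime import datetime, timezone

# Journal stamp (pre-step clock) vs. the wall time the entry says it synced to.
stamped = datetime(2026, 1, 23, 18, 49, 37, 477861, tzinfo=timezone.utc)
synced = datetime(2026, 1, 23, 18, 49, 37, 507067, tzinfo=timezone.utc)
print(f"clock stepped ~{(synced - stamped).total_seconds() * 1000:.1f} ms forward")
```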
Jan 23 18:49:37.556589 dockerd[1877]: time="2026-01-23T18:49:37.556549630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 18:49:37.556718 dockerd[1877]: time="2026-01-23T18:49:37.556638630Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 18:49:37.556780 dockerd[1877]: time="2026-01-23T18:49:37.556766100Z" level=info msg="Initializing buildkit" Jan 23 18:49:37.599867 dockerd[1877]: time="2026-01-23T18:49:37.599602216Z" level=info msg="Completed buildkit initialization" Jan 23 18:49:37.605575 dockerd[1877]: time="2026-01-23T18:49:37.605538601Z" level=info msg="Daemon has completed initialization" Jan 23 18:49:37.605964 dockerd[1877]: time="2026-01-23T18:49:37.605739181Z" level=info msg="API listen on /run/docker.sock" Jan 23 18:49:37.605855 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 18:49:39.044940 containerd[1630]: time="2026-01-23T18:49:39.044875105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 18:49:39.685330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2097642091.mount: Deactivated successfully. Jan 23 18:49:40.678443 containerd[1630]: time="2026-01-23T18:49:40.678398298Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:40.679485 containerd[1630]: time="2026-01-23T18:49:40.679310467Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812" Jan 23 18:49:40.680220 containerd[1630]: time="2026-01-23T18:49:40.680200599Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:40.682057 containerd[1630]: time="2026-01-23T18:49:40.682040512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:40.682601 containerd[1630]: time="2026-01-23T18:49:40.682580625Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.63765975s" Jan 23 18:49:40.682643 containerd[1630]: time="2026-01-23T18:49:40.682606368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\""
Jan 23 18:49:40.683339 containerd[1630]: time="2026-01-23T18:49:40.683320008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 18:49:42.176308 containerd[1630]: time="2026-01-23T18:49:42.176236675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:42.177730 containerd[1630]: time="2026-01-23T18:49:42.177688145Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803" Jan 23 18:49:42.178837 containerd[1630]: time="2026-01-23T18:49:42.178787664Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:42.183139 containerd[1630]: time="2026-01-23T18:49:42.182323305Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:42.183139 containerd[1630]: time="2026-01-23T18:49:42.183043972Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.499702469s" Jan 23 18:49:42.183139 containerd[1630]: time="2026-01-23T18:49:42.183066715Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 18:49:42.184333 containerd[1630]: time="2026-01-23T18:49:42.184319827Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 18:49:43.816206 containerd[1630]: time="2026-01-23T18:49:43.815861266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:43.816710 containerd[1630]: time="2026-01-23T18:49:43.816686481Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124" Jan 23 18:49:43.817484 containerd[1630]: time="2026-01-23T18:49:43.817465493Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:43.819484 containerd[1630]: time="2026-01-23T18:49:43.819218753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:43.819814 containerd[1630]: time="2026-01-23T18:49:43.819794548Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.635284054s" Jan 23 18:49:43.819854 containerd[1630]: time="2026-01-23T18:49:43.819817069Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 18:49:43.820526 containerd[1630]: time="2026-01-23T18:49:43.820508332Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 18:49:44.082373 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:49:44.085674 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
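The dockerd warning at the top of this span says native overlay2 diff is disabled because the kernel sets CONFIG_OVERLAY_FS_REDIRECT_DIR. A quick way to confirm the kernel option, assuming the kernel exposes its build config at /proc/config.gz (CONFIG_IKCONFIG_PROC; some images only ship /boot/config-<release> instead):

```python
import gzip
from pathlib import Path

# Assumption: CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists.
OPTION = "CONFIG_OVERLAY_FS_REDIRECT_DIR"
src = Path("/proc/config.gz")

if src.exists():
    for line in gzip.decompress(src.read_bytes()).decode().splitlines():
        if line.startswith(OPTION + "="):
            print(line)  # "=y" is what makes dockerd skip the native diff driver
else:
    print("no /proc/config.gz; check /boot/config-<release> instead")
```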
Jan 23 18:49:44.298457 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:44.307447 (kubelet)[2162]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:49:44.332977 kubelet[2162]: E0123 18:49:44.332859 2162 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:49:44.337952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:49:44.338243 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:49:44.338694 systemd[1]: kubelet.service: Consumed 199ms CPU time, 110.4M memory peak. Jan 23 18:49:45.127248 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3875802416.mount: Deactivated successfully. Jan 23 18:49:45.617550 containerd[1630]: time="2026-01-23T18:49:45.617485898Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:45.618547 containerd[1630]: time="2026-01-23T18:49:45.618496969Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124" Jan 23 18:49:45.619285 containerd[1630]: time="2026-01-23T18:49:45.619259403Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:45.620918 containerd[1630]: time="2026-01-23T18:49:45.620718061Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:45.621519 containerd[1630]: time="2026-01-23T18:49:45.621113416Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.800585989s" Jan 23 18:49:45.621519 containerd[1630]: time="2026-01-23T18:49:45.621155386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 18:49:45.621658 containerd[1630]: time="2026-01-23T18:49:45.621638375Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 18:49:46.093061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3693361683.mount: Deactivated successfully. 
Jan 23 18:49:47.035408 containerd[1630]: time="2026-01-23T18:49:47.035354536Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:47.036432 containerd[1630]: time="2026-01-23T18:49:47.036326320Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Jan 23 18:49:47.037260 containerd[1630]: time="2026-01-23T18:49:47.037241895Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:47.039078 containerd[1630]: time="2026-01-23T18:49:47.039061062Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:47.039664 containerd[1630]: time="2026-01-23T18:49:47.039649082Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.417991755s" Jan 23 18:49:47.039725 containerd[1630]: time="2026-01-23T18:49:47.039716042Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 18:49:47.040321 containerd[1630]: time="2026-01-23T18:49:47.040301419Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 18:49:47.508295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440830111.mount: Deactivated successfully. 
Jan 23 18:49:47.514591 containerd[1630]: time="2026-01-23T18:49:47.514518754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:49:47.515770 containerd[1630]: time="2026-01-23T18:49:47.515725195Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 23 18:49:47.517782 containerd[1630]: time="2026-01-23T18:49:47.516835024Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:49:47.519784 containerd[1630]: time="2026-01-23T18:49:47.519715530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 18:49:47.521036 containerd[1630]: time="2026-01-23T18:49:47.520690466Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 480.366193ms" Jan 23 18:49:47.521036 containerd[1630]: time="2026-01-23T18:49:47.520733251Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 18:49:47.521925 containerd[1630]: time="2026-01-23T18:49:47.521529701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 18:49:48.032053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1852323179.mount: Deactivated successfully. 
Jan 23 18:49:49.289294 containerd[1630]: time="2026-01-23T18:49:49.289225777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.290215 containerd[1630]: time="2026-01-23T18:49:49.290065725Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291" Jan 23 18:49:49.290979 containerd[1630]: time="2026-01-23T18:49:49.290951276Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.293000 containerd[1630]: time="2026-01-23T18:49:49.292647266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:49:49.293369 containerd[1630]: time="2026-01-23T18:49:49.293350578Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 1.771791688s" Jan 23 18:49:49.293403 containerd[1630]: time="2026-01-23T18:49:49.293373069Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 18:49:53.323741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:53.324004 systemd[1]: kubelet.service: Consumed 199ms CPU time, 110.4M memory peak. Jan 23 18:49:53.327499 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:53.373390 systemd[1]: Reload requested from client PID 2316 ('systemctl') (unit session-7.scope)... Jan 23 18:49:53.373420 systemd[1]: Reloading... Jan 23 18:49:53.472263 zram_generator::config[2357]: No configuration found. Jan 23 18:49:53.645410 systemd[1]: Reloading finished in 271 ms. Jan 23 18:49:53.701381 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:53.708211 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:53.710648 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:49:53.711006 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:53.711054 systemd[1]: kubelet.service: Consumed 105ms CPU time, 98.4M memory peak. Jan 23 18:49:53.713545 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:53.962894 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:53.975892 (kubelet)[2416]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:49:54.040200 kubelet[2416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:49:54.040200 kubelet[2416]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
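The seven pulls above each report "bytes read" and an elapsed time, so the effective transfer rate (including registry round-trips and unpacking) can be read straight off the log:

```python
# (bytes read, seconds) pairs copied verbatim from the containerd entries above.
pulls = {
    "kube-apiserver:v1.33.7":          (30114812, 1.63765975),
    "kube-controller-manager:v1.33.7": (26016803, 1.499702469),
    "kube-scheduler:v1.33.7":          (20158124, 1.635284054),
    "kube-proxy:v1.33.7":              (31930124, 1.800585989),
    "coredns/coredns:v1.12.0":         (20942332, 1.417991755),
    "pause:3.10":                      (321160, 0.480366193),
    "etcd:3.5.21-0":                   (58926291, 1.771791688),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")
```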
Jan 23 18:49:54.040200 kubelet[2416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:49:54.040778 kubelet[2416]: I0123 18:49:54.040293 2416 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:49:54.406131 kubelet[2416]: I0123 18:49:54.406025 2416 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:49:54.406131 kubelet[2416]: I0123 18:49:54.406045 2416 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:49:54.406519 kubelet[2416]: I0123 18:49:54.406357 2416 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:49:54.428137 kubelet[2416]: I0123 18:49:54.426925 2416 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:49:54.428137 kubelet[2416]: E0123 18:49:54.428088 2416 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://89.167.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 89.167.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:49:54.434035 kubelet[2416]: I0123 18:49:54.434010 2416 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:49:54.441575 kubelet[2416]: I0123 18:49:54.441546 2416 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 23 18:49:54.442012 kubelet[2416]: I0123 18:49:54.441975 2416 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:49:54.442248 kubelet[2416]: I0123 18:49:54.442014 2416 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-1-de7581f71a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:49:54.442334 kubelet[2416]: I0123 18:49:54.442256 2416 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:49:54.442334 kubelet[2416]: I0123 18:49:54.442270 2416 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:49:54.443679 kubelet[2416]: I0123 18:49:54.443654 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:49:54.446794 kubelet[2416]: I0123 18:49:54.446772 2416 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:49:54.446838 kubelet[2416]: I0123 18:49:54.446810 2416 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:49:54.446862 kubelet[2416]: I0123 18:49:54.446841 2416 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:49:54.446877 kubelet[2416]: I0123 18:49:54.446865 2416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:49:54.457886 kubelet[2416]: I0123 18:49:54.457536 2416 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:49:54.457886 kubelet[2416]: I0123 18:49:54.457821 2416 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:49:54.458497 kubelet[2416]: W0123 18:49:54.458486 2416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 18:49:54.458682 kubelet[2416]: E0123 18:49:54.458482 2416 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://89.167.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-3-1-de7581f71a&limit=500&resourceVersion=0\": dial tcp 89.167.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 18:49:54.458966 kubelet[2416]: E0123 18:49:54.458934 2416 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://89.167.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 18:49:54.461372 kubelet[2416]: I0123 18:49:54.461360 2416 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:49:54.461468 kubelet[2416]: I0123 18:49:54.461460 2416 server.go:1289] "Started kubelet" Jan 23 18:49:54.465146 kubelet[2416]: I0123 18:49:54.465107 2416 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:49:54.467842 kubelet[2416]: I0123 18:49:54.467831 2416 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:49:54.469198 kubelet[2416]: I0123 18:49:54.469058 2416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:49:54.471085 kubelet[2416]: I0123 18:49:54.470582 2416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:49:54.471085 kubelet[2416]: I0123 18:49:54.470746 2416 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:49:54.472133 kubelet[2416]: E0123 18:49:54.471201 2416 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://89.167.0.15:6443/api/v1/namespaces/default/events\": dial tcp 89.167.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-3-1-de7581f71a.188d70bf0b8eba7d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-3-1-de7581f71a,UID:ci-4459-2-3-1-de7581f71a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-3-1-de7581f71a,},FirstTimestamp:2026-01-23 18:49:54.461440637 +0000 UTC m=+0.477958355,LastTimestamp:2026-01-23 18:49:54.461440637 +0000 UTC m=+0.477958355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-1-de7581f71a,}" Jan 23 18:49:54.472793 kubelet[2416]: I0123 18:49:54.472781 2416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:49:54.475988 kubelet[2416]: E0123 18:49:54.475978 2416 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 18:49:54.476218 kubelet[2416]: E0123 18:49:54.476208 2416 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-3-1-de7581f71a\" not found" Jan 23 18:49:54.476289 kubelet[2416]: I0123 18:49:54.476283 2416 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:49:54.476428 kubelet[2416]: I0123 18:49:54.476419 2416 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:49:54.476494 kubelet[2416]: I0123 18:49:54.476488 2416 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:49:54.476897 kubelet[2416]: I0123 18:49:54.476887 2416 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:49:54.476995 kubelet[2416]: I0123 18:49:54.476985 2416 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:49:54.477506 kubelet[2416]: E0123 18:49:54.477306 2416 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://89.167.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 18:49:54.478333 kubelet[2416]: E0123 18:49:54.478315 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-de7581f71a?timeout=10s\": dial tcp 89.167.0.15:6443: connect: connection refused" interval="200ms" Jan 23 18:49:54.478653 kubelet[2416]: I0123 18:49:54.478645 2416 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:49:54.498574 kubelet[2416]: I0123 18:49:54.498480 2416 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:49:54.499581 kubelet[2416]: I0123 18:49:54.499571 2416 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 18:49:54.499816 kubelet[2416]: I0123 18:49:54.499625 2416 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:49:54.499816 kubelet[2416]: I0123 18:49:54.499641 2416 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
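The HardEvictionThresholds block inside the nodeConfig dump above is dense but decodes to the kubelet's five hard-eviction rules, all with a zero grace period:

```python
# Restatement of the HardEvictionThresholds values from the nodeConfig dump.
thresholds = {
    "memory.available": "< 100Mi",
    "nodefs.available": "< 10%",
    "nodefs.inodesFree": "< 5%",
    "imagefs.available": "< 15%",
    "imagefs.inodesFree": "< 5%",
}
for signal, rule in thresholds.items():
    print(f"evict when {signal} {rule} (grace period 0s)")
```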
Jan 23 18:49:54.499816 kubelet[2416]: I0123 18:49:54.499646 2416 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:49:54.499816 kubelet[2416]: E0123 18:49:54.499674 2416 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:49:54.507249 kubelet[2416]: E0123 18:49:54.507234 2416 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://89.167.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 89.167.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 18:49:54.514368 kubelet[2416]: I0123 18:49:54.514339 2416 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:49:54.514425 kubelet[2416]: I0123 18:49:54.514367 2416 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:49:54.514425 kubelet[2416]: I0123 18:49:54.514391 2416 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:49:54.516506 kubelet[2416]: I0123 18:49:54.516415 2416 policy_none.go:49] "None policy: Start" Jan 23 18:49:54.516506 kubelet[2416]: I0123 18:49:54.516465 2416 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:49:54.516506 kubelet[2416]: I0123 18:49:54.516485 2416 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:49:54.522455 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 18:49:54.532163 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 18:49:54.535442 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 18:49:54.547164 kubelet[2416]: E0123 18:49:54.547095 2416 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:49:54.547547 kubelet[2416]: I0123 18:49:54.547525 2416 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:49:54.547620 kubelet[2416]: I0123 18:49:54.547537 2416 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:49:54.548658 kubelet[2416]: I0123 18:49:54.548554 2416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:49:54.549753 kubelet[2416]: E0123 18:49:54.549682 2416 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 18:49:54.549858 kubelet[2416]: E0123 18:49:54.549815 2416 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-3-1-de7581f71a\" not found" Jan 23 18:49:54.620935 systemd[1]: Created slice kubepods-burstable-pod0896b4d7621f31b7cff35086f9200ff3.slice - libcontainer container kubepods-burstable-pod0896b4d7621f31b7cff35086f9200ff3.slice. Jan 23 18:49:54.631851 kubelet[2416]: E0123 18:49:54.631760 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.638861 systemd[1]: Created slice kubepods-burstable-pod6009dbea8d329f9915d828952b8c6794.slice - libcontainer container kubepods-burstable-pod6009dbea8d329f9915d828952b8c6794.slice. 
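The slices created here follow the kubelet's systemd cgroup layout: one slice per QoS class under kubepods.slice, then one slice per pod keyed by its UID. The pairing of UID to static pod is confirmed by the volume-reconciler entries just below:

```python
# Pod UIDs taken from the kubepods-burstable-pod<UID>.slice names above; the
# UID-to-pod pairing comes from the reconciler_common volume entries below.
uids = {
    "kube-scheduler": "0896b4d7621f31b7cff35086f9200ff3",
    "kube-apiserver": "6009dbea8d329f9915d828952b8c6794",
    "kube-controller-manager": "9a77fbddae1d8d0585f58f21ae451912",
}
for pod, uid in uids.items():
    print(f"{pod}: kubepods-burstable-pod{uid}.slice")
```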
Jan 23 18:49:54.643535 kubelet[2416]: E0123 18:49:54.643503 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.648054 systemd[1]: Created slice kubepods-burstable-pod9a77fbddae1d8d0585f58f21ae451912.slice - libcontainer container kubepods-burstable-pod9a77fbddae1d8d0585f58f21ae451912.slice. Jan 23 18:49:54.652653 kubelet[2416]: I0123 18:49:54.652583 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.652897 kubelet[2416]: E0123 18:49:54.652871 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.653442 kubelet[2416]: E0123 18:49:54.653392 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.0.15:6443/api/v1/nodes\": dial tcp 89.167.0.15:6443: connect: connection refused" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.679220 kubelet[2416]: E0123 18:49:54.679047 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-de7581f71a?timeout=10s\": dial tcp 89.167.0.15:6443: connect: connection refused" interval="400ms"
Jan 23 18:49:54.778815 kubelet[2416]: I0123 18:49:54.778705 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.778815 kubelet[2416]: I0123 18:49:54.778794 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.778815 kubelet[2416]: I0123 18:49:54.778822 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0896b4d7621f31b7cff35086f9200ff3-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-1-de7581f71a\" (UID: \"0896b4d7621f31b7cff35086f9200ff3\") " pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779068 kubelet[2416]: I0123 18:49:54.778854 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779068 kubelet[2416]: I0123 18:49:54.778878 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779068 kubelet[2416]: I0123 18:49:54.778903 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779068 kubelet[2416]: I0123 18:49:54.778925 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779068 kubelet[2416]: I0123 18:49:54.778947 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.779305 kubelet[2416]: I0123 18:49:54.778985 2416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a"
Jan 23 18:49:54.856282 kubelet[2416]: I0123 18:49:54.856227 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.856693 kubelet[2416]: E0123 18:49:54.856659 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.0.15:6443/api/v1/nodes\": dial tcp 89.167.0.15:6443: connect: connection refused" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:54.935435 containerd[1630]: time="2026-01-23T18:49:54.934703148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-1-de7581f71a,Uid:0896b4d7621f31b7cff35086f9200ff3,Namespace:kube-system,Attempt:0,}" Jan 23 18:49:54.945328 containerd[1630]: time="2026-01-23T18:49:54.944629390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-1-de7581f71a,Uid:6009dbea8d329f9915d828952b8c6794,Namespace:kube-system,Attempt:0,}" Jan 23 18:49:54.962474 containerd[1630]: time="2026-01-23T18:49:54.962382036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-1-de7581f71a,Uid:9a77fbddae1d8d0585f58f21ae451912,Namespace:kube-system,Attempt:0,}" Jan 23 18:49:54.988439 containerd[1630]: time="2026-01-23T18:49:54.986420912Z" level=info msg="connecting to shim fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082" address="unix:///run/containerd/s/e2d9535b681d7a4f9a185b4b6c19df623e82b964f44c394478f6961f0579e5b4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:49:55.006265 containerd[1630]: time="2026-01-23T18:49:55.006074277Z" level=info msg="connecting to shim 447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050" address="unix:///run/containerd/s/da2587a2993f5455c14158ead0729ee599e7d918017c0fe0794f851bf6363b47" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:49:55.022842 containerd[1630]: time="2026-01-23T18:49:55.022787405Z" level=info msg="connecting to shim d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc" address="unix:///run/containerd/s/f18f62548c96c603771666361a6339ac8cfe8a3b98bd1ef9792bd16e201ad4f4" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:49:55.055561 systemd[1]: Started cri-containerd-447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050.scope - libcontainer container 447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050. Jan 23 18:49:55.068362 systemd[1]: Started cri-containerd-fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082.scope - libcontainer container fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082. Jan 23 18:49:55.074326 systemd[1]: Started cri-containerd-d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc.scope - libcontainer container d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc. Jan 23 18:49:55.079715 kubelet[2416]: E0123 18:49:55.079646 2416 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-de7581f71a?timeout=10s\": dial tcp 89.167.0.15:6443: connect: connection refused" interval="800ms"
Jan 23 18:49:55.141160 containerd[1630]: time="2026-01-23T18:49:55.141094710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-3-1-de7581f71a,Uid:0896b4d7621f31b7cff35086f9200ff3,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082\"" Jan 23 18:49:55.143045 containerd[1630]: time="2026-01-23T18:49:55.142979357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-3-1-de7581f71a,Uid:6009dbea8d329f9915d828952b8c6794,Namespace:kube-system,Attempt:0,} returns sandbox id \"447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050\"" Jan 23 18:49:55.146221 containerd[1630]: time="2026-01-23T18:49:55.146171626Z" level=info msg="CreateContainer within sandbox \"fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 18:49:55.146755 containerd[1630]: time="2026-01-23T18:49:55.146703390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-3-1-de7581f71a,Uid:9a77fbddae1d8d0585f58f21ae451912,Namespace:kube-system,Attempt:0,} returns sandbox id \"d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc\"" Jan 23 18:49:55.147416 containerd[1630]: time="2026-01-23T18:49:55.147401430Z" level=info msg="CreateContainer within sandbox \"447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 18:49:55.152007 containerd[1630]: time="2026-01-23T18:49:55.151983463Z" level=info msg="Container 51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:49:55.162673 containerd[1630]: time="2026-01-23T18:49:55.162656410Z" level=info msg="CreateContainer within sandbox \"d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 18:49:55.173335 containerd[1630]: time="2026-01-23T18:49:55.173301039Z" level=info msg="CreateContainer within sandbox \"fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d\""
Jan 23 18:49:55.173952 containerd[1630]: time="2026-01-23T18:49:55.173928024Z" level=info msg="StartContainer for \"51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d\"" Jan 23 18:49:55.174966 containerd[1630]: time="2026-01-23T18:49:55.174926023Z" level=info msg="connecting to shim 51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d" address="unix:///run/containerd/s/e2d9535b681d7a4f9a185b4b6c19df623e82b964f44c394478f6961f0579e5b4" protocol=ttrpc version=3 Jan 23 18:49:55.175807 containerd[1630]: time="2026-01-23T18:49:55.175552616Z" level=info msg="Container 70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:49:55.177478 containerd[1630]: time="2026-01-23T18:49:55.177451872Z" level=info msg="Container 77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:49:55.187024 containerd[1630]: time="2026-01-23T18:49:55.186164135Z" level=info msg="CreateContainer within sandbox \"447e41eff53bd09219788768d3fd234d974d333a557f1ecafd95640df55d9050\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7\"" Jan 23 18:49:55.188404 containerd[1630]: time="2026-01-23T18:49:55.187873350Z" level=info msg="StartContainer for \"70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7\"" Jan 23 18:49:55.190868 containerd[1630]: time="2026-01-23T18:49:55.190841378Z" level=info msg="connecting to shim 70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7" address="unix:///run/containerd/s/da2587a2993f5455c14158ead0729ee599e7d918017c0fe0794f851bf6363b47" protocol=ttrpc version=3 Jan 23 18:49:55.191944 containerd[1630]: time="2026-01-23T18:49:55.191287299Z" level=info msg="CreateContainer within sandbox \"d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4\"" Jan 23 18:49:55.191944 containerd[1630]: time="2026-01-23T18:49:55.191568055Z" level=info msg="StartContainer for \"77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4\"" Jan 23 18:49:55.192242 containerd[1630]: time="2026-01-23T18:49:55.192220917Z" level=info msg="connecting to shim 77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4" address="unix:///run/containerd/s/f18f62548c96c603771666361a6339ac8cfe8a3b98bd1ef9792bd16e201ad4f4" protocol=ttrpc version=3 Jan 23 18:49:55.193349 systemd[1]: Started cri-containerd-51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d.scope - libcontainer container 51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d. Jan 23 18:49:55.229399 systemd[1]: Started cri-containerd-70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7.scope - libcontainer container 70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7. Jan 23 18:49:55.230240 systemd[1]: Started cri-containerd-77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4.scope - libcontainer container 77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4.
Jan 23 18:49:55.247074 containerd[1630]: time="2026-01-23T18:49:55.247023677Z" level=info msg="StartContainer for \"51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d\" returns successfully" Jan 23 18:49:55.259565 kubelet[2416]: I0123 18:49:55.259315 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:55.259565 kubelet[2416]: E0123 18:49:55.259544 2416 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.0.15:6443/api/v1/nodes\": dial tcp 89.167.0.15:6443: connect: connection refused" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:55.279713 containerd[1630]: time="2026-01-23T18:49:55.279694188Z" level=info msg="StartContainer for \"70374761da1fec58e3ced7fb4ad3d95125f0579b38bf061083bee56624766dc7\" returns successfully" Jan 23 18:49:55.303347 containerd[1630]: time="2026-01-23T18:49:55.300313144Z" level=info msg="StartContainer for \"77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4\" returns successfully" Jan 23 18:49:55.513624 kubelet[2416]: E0123 18:49:55.513160 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:55.514778 kubelet[2416]: E0123 18:49:55.514676 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:55.517603 kubelet[2416]: E0123 18:49:55.517590 2416 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.062214 kubelet[2416]: I0123 18:49:56.061599 2416 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.252505 kubelet[2416]: E0123 18:49:56.252416 2416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-3-1-de7581f71a\" not found" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.431277 kubelet[2416]: I0123 18:49:56.431053 2416 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.431277 kubelet[2416]: E0123 18:49:56.431097 2416 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-3-1-de7581f71a\": node \"ci-4459-2-3-1-de7581f71a\" not found" Jan 23 18:49:56.452772 kubelet[2416]: I0123 18:49:56.452594 2416 apiserver.go:52] "Watching apiserver" Jan 23 18:49:56.476708 kubelet[2416]: I0123 18:49:56.476670 2416 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:49:56.477860 kubelet[2416]: I0123 18:49:56.477716 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.484741 kubelet[2416]: E0123 18:49:56.484728 2416 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.484890 kubelet[2416]: I0123 18:49:56.484815 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.487336 kubelet[2416]: E0123 18:49:56.487284 2416 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-1-de7581f71a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a"
Jan 23 18:49:56.487647 kubelet[2416]: I0123 18:49:56.487395 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.488299 kubelet[2416]: E0123 18:49:56.488285 2416 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.517323 kubelet[2416]: I0123 18:49:56.517202 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.517558 kubelet[2416]: I0123 18:49:56.517549 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.518922 kubelet[2416]: E0123 18:49:56.518848 2416 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-1-de7581f71a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:56.519822 kubelet[2416]: E0123 18:49:56.519712 2416 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:57.520585 kubelet[2416]: I0123 18:49:57.520523 2416 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:58.450766 systemd[1]: Reload requested from client PID 2694 ('systemctl') (unit session-7.scope)... Jan 23 18:49:58.450826 systemd[1]: Reloading... Jan 23 18:49:58.613388 zram_generator::config[2738]: No configuration found. Jan 23 18:49:58.837408 systemd[1]: Reloading finished in 385 ms. Jan 23 18:49:58.867647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:58.886563 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:49:58.886761 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:58.886799 systemd[1]: kubelet.service: Consumed 961ms CPU time, 131.4M memory peak. Jan 23 18:49:58.889298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:49:59.051744 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:49:59.060492 (kubelet)[2789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:49:59.090397 kubelet[2789]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 18:49:59.090397 kubelet[2789]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:49:59.090397 kubelet[2789]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:49:59.090397 kubelet[2789]: I0123 18:49:59.090269 2789 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:49:59.096934 kubelet[2789]: I0123 18:49:59.096741 2789 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 18:49:59.096934 kubelet[2789]: I0123 18:49:59.096757 2789 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:49:59.097132 kubelet[2789]: I0123 18:49:59.097124 2789 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:49:59.099902 kubelet[2789]: I0123 18:49:59.099250 2789 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:49:59.101204 kubelet[2789]: I0123 18:49:59.101048 2789 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:49:59.103792 kubelet[2789]: I0123 18:49:59.103775 2789 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:49:59.107540 kubelet[2789]: I0123 18:49:59.107525 2789 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 18:49:59.107950 kubelet[2789]: I0123 18:49:59.107919 2789 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:49:59.108254 kubelet[2789]: I0123 18:49:59.107950 2789 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-3-1-de7581f71a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:49:59.108322 kubelet[2789]: I0123 18:49:59.108257 2789 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 18:49:59.108322 kubelet[2789]: I0123 18:49:59.108265 2789 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 18:49:59.108322 kubelet[2789]: I0123 18:49:59.108302 2789 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:49:59.108547 kubelet[2789]: I0123 18:49:59.108527 2789 kubelet.go:480] "Attempting to sync node with API server" Jan 23 18:49:59.108572 kubelet[2789]: I0123 18:49:59.108547 2789 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:49:59.108572 kubelet[2789]: I0123 18:49:59.108568 2789 kubelet.go:386] "Adding apiserver pod source" Jan 23 18:49:59.108615 kubelet[2789]: I0123 18:49:59.108579 2789 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:49:59.113204 kubelet[2789]: I0123 18:49:59.112549 2789 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:49:59.113204 kubelet[2789]: I0123 18:49:59.112844 2789 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:49:59.115155 kubelet[2789]: I0123 18:49:59.115144 2789 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 18:49:59.115256 kubelet[2789]: I0123 18:49:59.115249 2789 server.go:1289] "Started kubelet" Jan 23 18:49:59.117522 kubelet[2789]: I0123 18:49:59.117509 2789 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:49:59.121789 kubelet[2789]: I0123 18:49:59.121760 2789 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:49:59.122444 kubelet[2789]: I0123 18:49:59.122430 2789 server.go:317] "Adding debug handlers to kubelet server" Jan 23 18:49:59.125375 kubelet[2789]: I0123 18:49:59.125363 2789 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 18:49:59.125684 kubelet[2789]: I0123 18:49:59.125654 2789 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:49:59.125833 kubelet[2789]: I0123 18:49:59.125808 2789 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:49:59.126091 kubelet[2789]: I0123 18:49:59.126072 2789 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:49:59.127300 kubelet[2789]: I0123 18:49:59.127256 2789 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:49:59.127439 kubelet[2789]: I0123 18:49:59.127416 2789 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:49:59.129027 kubelet[2789]: I0123 18:49:59.128994 2789 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:49:59.134518 kubelet[2789]: I0123 18:49:59.134326 2789 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 18:49:59.134518 kubelet[2789]: I0123 18:49:59.134421 2789 reconciler.go:26] "Reconciler: start to sync state" Jan 23 18:49:59.137783 kubelet[2789]: I0123 18:49:59.137764 2789 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 18:49:59.138817 kubelet[2789]: I0123 18:49:59.138791 2789 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6" Jan 23 18:49:59.138870 kubelet[2789]: I0123 18:49:59.138864 2789 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 18:49:59.138909 kubelet[2789]: I0123 18:49:59.138903 2789 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:49:59.138938 kubelet[2789]: I0123 18:49:59.138933 2789 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 18:49:59.139005 kubelet[2789]: E0123 18:49:59.138994 2789 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:49:59.174106 kubelet[2789]: I0123 18:49:59.174084 2789 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174277 2789 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174306 2789 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174393 2789 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174399 2789 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174410 2789 policy_none.go:49] "None policy: Start" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174418 2789 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174426 2789 state_mem.go:35] "Initializing new in-memory state store" Jan 23 18:49:59.174519 kubelet[2789]: I0123 18:49:59.174480 2789 state_mem.go:75] "Updated machine memory state" Jan 23 18:49:59.177698 kubelet[2789]: E0123 18:49:59.177649 2789 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:49:59.177779 kubelet[2789]: I0123 18:49:59.177760 2789 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:49:59.177801 kubelet[2789]: I0123 18:49:59.177775 2789 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:49:59.178101 kubelet[2789]: I0123 18:49:59.178085 2789 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:49:59.180529 kubelet[2789]: E0123 18:49:59.180224 2789 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:49:59.239916 kubelet[2789]: I0123 18:49:59.239882 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.240470 kubelet[2789]: I0123 18:49:59.240354 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.242297 kubelet[2789]: I0123 18:49:59.240481 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.248243 kubelet[2789]: E0123 18:49:59.248206 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.283501 kubelet[2789]: I0123 18:49:59.283454 2789 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.292881 kubelet[2789]: I0123 18:49:59.292808 2789 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.292994 kubelet[2789]: I0123 18:49:59.292905 2789 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.436487 kubelet[2789]: I0123 18:49:59.436341 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.436487 kubelet[2789]: I0123 18:49:59.436398 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-ca-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.436487 kubelet[2789]: I0123 18:49:59.436428 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.436487 kubelet[2789]: I0123 18:49:59.436450 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.436487 kubelet[2789]: I0123 18:49:59.436475 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0896b4d7621f31b7cff35086f9200ff3-kubeconfig\") pod \"kube-scheduler-ci-4459-2-3-1-de7581f71a\" (UID: \"0896b4d7621f31b7cff35086f9200ff3\") " pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.437016 kubelet[2789]: I0123 18:49:59.436499 2789 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-ca-certs\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.437016 kubelet[2789]: I0123 18:49:59.436523 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6009dbea8d329f9915d828952b8c6794-k8s-certs\") pod \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" (UID: \"6009dbea8d329f9915d828952b8c6794\") " pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.437016 kubelet[2789]: I0123 18:49:59.436544 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:49:59.437016 kubelet[2789]: I0123 18:49:59.436568 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9a77fbddae1d8d0585f58f21ae451912-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" (UID: \"9a77fbddae1d8d0585f58f21ae451912\") " pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.110284 kubelet[2789]: I0123 18:50:00.110231 2789 apiserver.go:52] "Watching apiserver" Jan 23 18:50:00.134602 kubelet[2789]: I0123 18:50:00.134431 2789 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 18:50:00.163009 kubelet[2789]: I0123 18:50:00.162494 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.163009 kubelet[2789]: I0123 18:50:00.162570 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.163604 kubelet[2789]: I0123 18:50:00.163245 2789 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.171670 kubelet[2789]: E0123 18:50:00.171611 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-3-1-de7581f71a\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.172741 kubelet[2789]: E0123 18:50:00.172701 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-3-1-de7581f71a\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.173220 kubelet[2789]: E0123 18:50:00.173166 2789 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-3-1-de7581f71a\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" Jan 23 18:50:00.193917 kubelet[2789]: I0123 18:50:00.193709 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-3-1-de7581f71a" podStartSLOduration=3.193663616 podStartE2EDuration="3.193663616s" podCreationTimestamp="2026-01-23 18:49:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:00.193374414 +0000 UTC m=+1.129318623" watchObservedRunningTime="2026-01-23 18:50:00.193663616 +0000 UTC m=+1.129607815" Jan 23 18:50:00.213454 kubelet[2789]: I0123 18:50:00.213226 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-3-1-de7581f71a" podStartSLOduration=1.213174744 podStartE2EDuration="1.213174744s" podCreationTimestamp="2026-01-23 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:00.202974753 +0000 UTC m=+1.138918952" watchObservedRunningTime="2026-01-23 18:50:00.213174744 +0000 UTC m=+1.149118953" Jan 23 18:50:03.689543 kubelet[2789]: I0123 18:50:03.689451 2789 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:50:03.690652 containerd[1630]: time="2026-01-23T18:50:03.689882265Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:50:03.691509 kubelet[2789]: I0123 18:50:03.691437 2789 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:50:04.031339 kubelet[2789]: I0123 18:50:04.030720 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-3-1-de7581f71a" podStartSLOduration=5.030695569 podStartE2EDuration="5.030695569s" podCreationTimestamp="2026-01-23 18:49:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:00.214088891 +0000 UTC m=+1.150033100" watchObservedRunningTime="2026-01-23 18:50:04.030695569 +0000 UTC m=+4.966639778" Jan 23 18:50:04.050293 systemd[1]: Created slice kubepods-besteffort-pod5b8d5ea6_8d0d_42f2_96f0_687a5d0c4741.slice - libcontainer container kubepods-besteffort-pod5b8d5ea6_8d0d_42f2_96f0_687a5d0c4741.slice. 
Jan 23 18:50:04.068708 kubelet[2789]: I0123 18:50:04.068642 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-kube-proxy\") pod \"kube-proxy-2vwv6\" (UID: \"5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741\") " pod="kube-system/kube-proxy-2vwv6"
Jan 23 18:50:04.068708 kubelet[2789]: I0123 18:50:04.068703 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-xtables-lock\") pod \"kube-proxy-2vwv6\" (UID: \"5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741\") " pod="kube-system/kube-proxy-2vwv6"
Jan 23 18:50:04.068708 kubelet[2789]: I0123 18:50:04.068734 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-lib-modules\") pod \"kube-proxy-2vwv6\" (UID: \"5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741\") " pod="kube-system/kube-proxy-2vwv6"
Jan 23 18:50:04.068995 kubelet[2789]: I0123 18:50:04.068759 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grckp\" (UniqueName: \"kubernetes.io/projected/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-kube-api-access-grckp\") pod \"kube-proxy-2vwv6\" (UID: \"5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741\") " pod="kube-system/kube-proxy-2vwv6"
Jan 23 18:50:04.178819 kubelet[2789]: E0123 18:50:04.178634 2789 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 23 18:50:04.178819 kubelet[2789]: E0123 18:50:04.178809 2789 projected.go:194] Error preparing data for projected volume kube-api-access-grckp for pod kube-system/kube-proxy-2vwv6: configmap "kube-root-ca.crt" not found
Jan 23 18:50:04.179061 kubelet[2789]: E0123 18:50:04.178922 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-kube-api-access-grckp podName:5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741 nodeName:}" failed. No retries permitted until 2026-01-23 18:50:04.678897706 +0000 UTC m=+5.614841905 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-grckp" (UniqueName: "kubernetes.io/projected/5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741-kube-api-access-grckp") pod "kube-proxy-2vwv6" (UID: "5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741") : configmap "kube-root-ca.crt" not found
Jan 23 18:50:04.954585 systemd[1]: Created slice kubepods-besteffort-poda7606db7_4374_48f1_bff6_9b54df03699b.slice - libcontainer container kubepods-besteffort-poda7606db7_4374_48f1_bff6_9b54df03699b.slice.
Jan 23 18:50:04.960792 containerd[1630]: time="2026-01-23T18:50:04.960753617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vwv6,Uid:5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741,Namespace:kube-system,Attempt:0,}"
Jan 23 18:50:04.975199 kubelet[2789]: I0123 18:50:04.973742 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv5wk\" (UniqueName: \"kubernetes.io/projected/a7606db7-4374-48f1-bff6-9b54df03699b-kube-api-access-bv5wk\") pod \"tigera-operator-7dcd859c48-crghv\" (UID: \"a7606db7-4374-48f1-bff6-9b54df03699b\") " pod="tigera-operator/tigera-operator-7dcd859c48-crghv"
Jan 23 18:50:04.975199 kubelet[2789]: I0123 18:50:04.974063 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a7606db7-4374-48f1-bff6-9b54df03699b-var-lib-calico\") pod \"tigera-operator-7dcd859c48-crghv\" (UID: \"a7606db7-4374-48f1-bff6-9b54df03699b\") " pod="tigera-operator/tigera-operator-7dcd859c48-crghv"
Jan 23 18:50:04.983895 containerd[1630]: time="2026-01-23T18:50:04.983802672Z" level=info msg="connecting to shim 258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd" address="unix:///run/containerd/s/b47278a8bd7f4d48cae580856d530e9d270bac18ff6687aa80b7c515788d4297" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:05.014305 systemd[1]: Started cri-containerd-258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd.scope - libcontainer container 258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd.
Jan 23 18:50:05.043797 containerd[1630]: time="2026-01-23T18:50:05.043768386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2vwv6,Uid:5b8d5ea6-8d0d-42f2-96f0-687a5d0c4741,Namespace:kube-system,Attempt:0,} returns sandbox id \"258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd\""
Jan 23 18:50:05.049352 containerd[1630]: time="2026-01-23T18:50:05.049335327Z" level=info msg="CreateContainer within sandbox \"258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 18:50:05.061113 containerd[1630]: time="2026-01-23T18:50:05.060621247Z" level=info msg="Container 434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:50:05.062841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1416194915.mount: Deactivated successfully.
Jan 23 18:50:05.074721 containerd[1630]: time="2026-01-23T18:50:05.074690148Z" level=info msg="CreateContainer within sandbox \"258f7220874459daa5fb98bb54e0425f18a611bf0d1ed904e89b1967cd1d26fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5\""
Jan 23 18:50:05.080069 containerd[1630]: time="2026-01-23T18:50:05.078522665Z" level=info msg="StartContainer for \"434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5\""
Jan 23 18:50:05.082198 containerd[1630]: time="2026-01-23T18:50:05.082050211Z" level=info msg="connecting to shim 434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5" address="unix:///run/containerd/s/b47278a8bd7f4d48cae580856d530e9d270bac18ff6687aa80b7c515788d4297" protocol=ttrpc version=3
Jan 23 18:50:05.104291 systemd[1]: Started cri-containerd-434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5.scope - libcontainer container 434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5.
Jan 23 18:50:05.177474 containerd[1630]: time="2026-01-23T18:50:05.177417525Z" level=info msg="StartContainer for \"434ecc857b8fbfe2c3ecb31117be737648515c2c10280477958c4002427f2ca5\" returns successfully"
Jan 23 18:50:05.259749 containerd[1630]: time="2026-01-23T18:50:05.259572553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crghv,Uid:a7606db7-4374-48f1-bff6-9b54df03699b,Namespace:tigera-operator,Attempt:0,}"
Jan 23 18:50:05.287374 containerd[1630]: time="2026-01-23T18:50:05.287301549Z" level=info msg="connecting to shim 243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177" address="unix:///run/containerd/s/8296365632cf01a2f90a51743872576d49e9025a8507b3ae716b1c30cbb6b16c" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:05.312278 systemd[1]: Started cri-containerd-243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177.scope - libcontainer container 243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177.
Jan 23 18:50:05.369973 containerd[1630]: time="2026-01-23T18:50:05.369938926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-crghv,Uid:a7606db7-4374-48f1-bff6-9b54df03699b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177\""
Jan 23 18:50:05.372364 containerd[1630]: time="2026-01-23T18:50:05.372350223Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 18:50:05.519500 update_engine[1610]: I20260123 18:50:05.519265 1610 update_attempter.cc:509] Updating boot flags...
Jan 23 18:50:07.466571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1857492504.mount: Deactivated successfully.
Jan 23 18:50:08.078477 containerd[1630]: time="2026-01-23T18:50:08.078412940Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:08.079555 containerd[1630]: time="2026-01-23T18:50:08.079526904Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 23 18:50:08.080573 containerd[1630]: time="2026-01-23T18:50:08.080536879Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:08.082157 containerd[1630]: time="2026-01-23T18:50:08.082124732Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:08.082814 containerd[1630]: time="2026-01-23T18:50:08.082529383Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.710160574s"
Jan 23 18:50:08.082814 containerd[1630]: time="2026-01-23T18:50:08.082555970Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 23 18:50:08.085650 containerd[1630]: time="2026-01-23T18:50:08.085609962Z" level=info msg="CreateContainer within sandbox \"243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 18:50:08.091130 containerd[1630]: time="2026-01-23T18:50:08.090761856Z" level=info msg="Container f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:50:08.107154 containerd[1630]: time="2026-01-23T18:50:08.107118314Z" level=info msg="CreateContainer within sandbox \"243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\""
Jan 23 18:50:08.107559 containerd[1630]: time="2026-01-23T18:50:08.107497238Z" level=info msg="StartContainer for \"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\""
Jan 23 18:50:08.108110 containerd[1630]: time="2026-01-23T18:50:08.108090189Z" level=info msg="connecting to shim f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0" address="unix:///run/containerd/s/8296365632cf01a2f90a51743872576d49e9025a8507b3ae716b1c30cbb6b16c" protocol=ttrpc version=3
Jan 23 18:50:08.129271 systemd[1]: Started cri-containerd-f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0.scope - libcontainer container f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0.
Jan 23 18:50:08.156351 containerd[1630]: time="2026-01-23T18:50:08.156318002Z" level=info msg="StartContainer for \"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\" returns successfully"
Jan 23 18:50:08.199028 kubelet[2789]: I0123 18:50:08.198634 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2vwv6" podStartSLOduration=4.198619241 podStartE2EDuration="4.198619241s" podCreationTimestamp="2026-01-23 18:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:06.197448174 +0000 UTC m=+7.133392383" watchObservedRunningTime="2026-01-23 18:50:08.198619241 +0000 UTC m=+9.134563410"
Jan 23 18:50:12.916350 kubelet[2789]: I0123 18:50:12.916269 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-crghv" podStartSLOduration=6.204411024 podStartE2EDuration="8.91624701s" podCreationTimestamp="2026-01-23 18:50:04 +0000 UTC" firstStartedPulling="2026-01-23 18:50:05.371412692 +0000 UTC m=+6.307356851" lastFinishedPulling="2026-01-23 18:50:08.083248678 +0000 UTC m=+9.019192837" observedRunningTime="2026-01-23 18:50:08.199450718 +0000 UTC m=+9.135394897" watchObservedRunningTime="2026-01-23 18:50:12.91624701 +0000 UTC m=+13.852191209"
Jan 23 18:50:13.893581 sudo[1859]: pam_unix(sudo:session): session closed for user root
Jan 23 18:50:14.014768 sshd[1858]: Connection closed by 20.161.92.111 port 57614
Jan 23 18:50:14.016461 sshd-session[1855]: pam_unix(sshd:session): session closed for user core
Jan 23 18:50:14.020804 systemd[1]: sshd@6-89.167.0.15:22-20.161.92.111:57614.service: Deactivated successfully.
Jan 23 18:50:14.024137 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 18:50:14.024427 systemd[1]: session-7.scope: Consumed 6.334s CPU time, 233M memory peak.
Jan 23 18:50:14.025819 systemd-logind[1606]: Session 7 logged out. Waiting for processes to exit.
Jan 23 18:50:14.028350 systemd-logind[1606]: Removed session 7.
Jan 23 18:50:18.309038 systemd[1]: Created slice kubepods-besteffort-pod8e475a9b_0284_430b_999e_1531a8efdb04.slice - libcontainer container kubepods-besteffort-pod8e475a9b_0284_430b_999e_1531a8efdb04.slice.
Jan 23 18:50:18.364879 kubelet[2789]: I0123 18:50:18.364790 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e475a9b-0284-430b-999e-1531a8efdb04-tigera-ca-bundle\") pod \"calico-typha-864c5b9cd5-dd4rw\" (UID: \"8e475a9b-0284-430b-999e-1531a8efdb04\") " pod="calico-system/calico-typha-864c5b9cd5-dd4rw"
Jan 23 18:50:18.364879 kubelet[2789]: I0123 18:50:18.364820 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e475a9b-0284-430b-999e-1531a8efdb04-typha-certs\") pod \"calico-typha-864c5b9cd5-dd4rw\" (UID: \"8e475a9b-0284-430b-999e-1531a8efdb04\") " pod="calico-system/calico-typha-864c5b9cd5-dd4rw"
Jan 23 18:50:18.364879 kubelet[2789]: I0123 18:50:18.364837 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhqt8\" (UniqueName: \"kubernetes.io/projected/8e475a9b-0284-430b-999e-1531a8efdb04-kube-api-access-bhqt8\") pod \"calico-typha-864c5b9cd5-dd4rw\" (UID: \"8e475a9b-0284-430b-999e-1531a8efdb04\") " pod="calico-system/calico-typha-864c5b9cd5-dd4rw"
Jan 23 18:50:18.428022 systemd[1]: Created slice kubepods-besteffort-podfb689241_7ce2_4d1b_b9c7_eb5c3b5264a8.slice - libcontainer container kubepods-besteffort-podfb689241_7ce2_4d1b_b9c7_eb5c3b5264a8.slice.
Jan 23 18:50:18.465395 kubelet[2789]: I0123 18:50:18.465343 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-flexvol-driver-host\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465395 kubelet[2789]: I0123 18:50:18.465393 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-xtables-lock\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465580 kubelet[2789]: I0123 18:50:18.465424 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-cni-net-dir\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465580 kubelet[2789]: I0123 18:50:18.465443 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-tigera-ca-bundle\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465580 kubelet[2789]: I0123 18:50:18.465464 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-var-run-calico\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465580 kubelet[2789]: I0123 18:50:18.465483 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69vcz\" (UniqueName: \"kubernetes.io/projected/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-kube-api-access-69vcz\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465580 kubelet[2789]: I0123 18:50:18.465503 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-policysync\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465747 kubelet[2789]: I0123 18:50:18.465524 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-cni-log-dir\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465747 kubelet[2789]: I0123 18:50:18.465542 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-lib-modules\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465747 kubelet[2789]: I0123 18:50:18.465559 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-var-lib-calico\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465747 kubelet[2789]: I0123 18:50:18.465593 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-cni-bin-dir\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.465747 kubelet[2789]: I0123 18:50:18.465611 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8-node-certs\") pod \"calico-node-5bxtv\" (UID: \"fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8\") " pod="calico-system/calico-node-5bxtv"
Jan 23 18:50:18.572411 kubelet[2789]: E0123 18:50:18.572039 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:50:18.573366 kubelet[2789]: W0123 18:50:18.573265 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:50:18.573366 kubelet[2789]: E0123 18:50:18.573326 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:50:18.580082 kubelet[2789]: E0123 18:50:18.579996 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:50:18.580082 kubelet[2789]: W0123 18:50:18.580020 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:50:18.580082 kubelet[2789]: E0123 18:50:18.580040 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:50:18.589719 kubelet[2789]: E0123 18:50:18.589659 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 18:50:18.590016 kubelet[2789]: W0123 18:50:18.589828 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 18:50:18.590016 kubelet[2789]: E0123 18:50:18.589849 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 18:50:18.616829 containerd[1630]: time="2026-01-23T18:50:18.616692049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864c5b9cd5-dd4rw,Uid:8e475a9b-0284-430b-999e-1531a8efdb04,Namespace:calico-system,Attempt:0,}"
Jan 23 18:50:18.655956 containerd[1630]: time="2026-01-23T18:50:18.655916841Z" level=info msg="connecting to shim 6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a" address="unix:///run/containerd/s/6b1e2862481d459464b50040386527fd86296aafce237723c615cad69bbd58dd" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:50:18.684009 kubelet[2789]: E0123 18:50:18.683774 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313"
Jan 23 18:50:18.708548 systemd[1]: Started cri-containerd-6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a.scope - libcontainer container 6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a.
Jan 23 18:50:18.731475 containerd[1630]: time="2026-01-23T18:50:18.731358135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5bxtv,Uid:fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:18.753214 containerd[1630]: time="2026-01-23T18:50:18.753166602Z" level=info msg="connecting to shim 1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa" address="unix:///run/containerd/s/93aef2cfb2525551fdf221b35d05ec91ee6ba6af2f5c2c7a73f94891705c8b69" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:18.755193 kubelet[2789]: E0123 18:50:18.755141 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.755337 kubelet[2789]: W0123 18:50:18.755168 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.755362 kubelet[2789]: E0123 18:50:18.755339 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.755874 kubelet[2789]: E0123 18:50:18.755833 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.755874 kubelet[2789]: W0123 18:50:18.755853 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.755874 kubelet[2789]: E0123 18:50:18.755866 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.757430 kubelet[2789]: E0123 18:50:18.757342 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.757430 kubelet[2789]: W0123 18:50:18.757361 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.757430 kubelet[2789]: E0123 18:50:18.757376 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.758086 kubelet[2789]: E0123 18:50:18.758020 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.758086 kubelet[2789]: W0123 18:50:18.758040 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.758086 kubelet[2789]: E0123 18:50:18.758058 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.758835 kubelet[2789]: E0123 18:50:18.758745 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.758835 kubelet[2789]: W0123 18:50:18.758767 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.758835 kubelet[2789]: E0123 18:50:18.758785 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.760204 kubelet[2789]: E0123 18:50:18.759666 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.760204 kubelet[2789]: W0123 18:50:18.759685 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.760204 kubelet[2789]: E0123 18:50:18.759719 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.760281 kubelet[2789]: E0123 18:50:18.760235 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.760281 kubelet[2789]: W0123 18:50:18.760243 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.760281 kubelet[2789]: E0123 18:50:18.760250 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.760622 kubelet[2789]: E0123 18:50:18.760604 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.760622 kubelet[2789]: W0123 18:50:18.760616 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.760622 kubelet[2789]: E0123 18:50:18.760624 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.760995 kubelet[2789]: E0123 18:50:18.760974 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.760995 kubelet[2789]: W0123 18:50:18.760985 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.761148 kubelet[2789]: E0123 18:50:18.761038 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.761455 kubelet[2789]: E0123 18:50:18.761385 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.761455 kubelet[2789]: W0123 18:50:18.761396 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.761455 kubelet[2789]: E0123 18:50:18.761403 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.761787 kubelet[2789]: E0123 18:50:18.761767 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.761847 kubelet[2789]: W0123 18:50:18.761829 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.761847 kubelet[2789]: E0123 18:50:18.761837 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.762258 kubelet[2789]: E0123 18:50:18.762193 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.762258 kubelet[2789]: W0123 18:50:18.762204 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.762258 kubelet[2789]: E0123 18:50:18.762211 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.763029 kubelet[2789]: E0123 18:50:18.762754 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.763029 kubelet[2789]: W0123 18:50:18.762769 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.763029 kubelet[2789]: E0123 18:50:18.762786 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.763263 kubelet[2789]: E0123 18:50:18.763254 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.763476 kubelet[2789]: W0123 18:50:18.763326 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.763476 kubelet[2789]: E0123 18:50:18.763336 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.763714 kubelet[2789]: E0123 18:50:18.763705 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.763793 kubelet[2789]: W0123 18:50:18.763786 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.763887 kubelet[2789]: E0123 18:50:18.763850 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.764107 kubelet[2789]: E0123 18:50:18.764098 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.764219 kubelet[2789]: W0123 18:50:18.764151 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.764219 kubelet[2789]: E0123 18:50:18.764160 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765236 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.765830 kubelet[2789]: W0123 18:50:18.765245 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765252 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765411 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.765830 kubelet[2789]: W0123 18:50:18.765416 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765422 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765581 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.765830 kubelet[2789]: W0123 18:50:18.765586 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765591 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.765830 kubelet[2789]: E0123 18:50:18.765750 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.766012 kubelet[2789]: W0123 18:50:18.765756 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.766012 kubelet[2789]: E0123 18:50:18.765761 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.768250 kubelet[2789]: E0123 18:50:18.768085 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.768250 kubelet[2789]: W0123 18:50:18.768094 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.768250 kubelet[2789]: E0123 18:50:18.768102 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.768250 kubelet[2789]: I0123 18:50:18.768127 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2a5fa6c0-649d-4612-b31e-23030250d313-varrun\") pod \"csi-node-driver-hl4mg\" (UID: \"2a5fa6c0-649d-4612-b31e-23030250d313\") " pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:18.769272 kubelet[2789]: E0123 18:50:18.769134 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.769272 kubelet[2789]: W0123 18:50:18.769144 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.769272 kubelet[2789]: E0123 18:50:18.769152 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.769272 kubelet[2789]: I0123 18:50:18.769169 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gvzz7\" (UniqueName: \"kubernetes.io/projected/2a5fa6c0-649d-4612-b31e-23030250d313-kube-api-access-gvzz7\") pod \"csi-node-driver-hl4mg\" (UID: \"2a5fa6c0-649d-4612-b31e-23030250d313\") " pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:18.769413 kubelet[2789]: E0123 18:50:18.769404 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.769547 kubelet[2789]: W0123 18:50:18.769442 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.769547 kubelet[2789]: E0123 18:50:18.769451 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.769659 kubelet[2789]: E0123 18:50:18.769651 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.769704 kubelet[2789]: W0123 18:50:18.769697 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.769735 kubelet[2789]: E0123 18:50:18.769729 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.770046 kubelet[2789]: E0123 18:50:18.770027 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.770092 kubelet[2789]: W0123 18:50:18.770084 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.770124 kubelet[2789]: E0123 18:50:18.770117 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.770165 kubelet[2789]: I0123 18:50:18.770158 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2a5fa6c0-649d-4612-b31e-23030250d313-registration-dir\") pod \"csi-node-driver-hl4mg\" (UID: \"2a5fa6c0-649d-4612-b31e-23030250d313\") " pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:18.770417 kubelet[2789]: E0123 18:50:18.770408 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.770464 kubelet[2789]: W0123 18:50:18.770457 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.770499 kubelet[2789]: E0123 18:50:18.770492 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.770620 kubelet[2789]: I0123 18:50:18.770610 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2a5fa6c0-649d-4612-b31e-23030250d313-socket-dir\") pod \"csi-node-driver-hl4mg\" (UID: \"2a5fa6c0-649d-4612-b31e-23030250d313\") " pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:18.770841 kubelet[2789]: E0123 18:50:18.770832 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.770892 kubelet[2789]: W0123 18:50:18.770874 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.770892 kubelet[2789]: E0123 18:50:18.770882 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.771211 kubelet[2789]: E0123 18:50:18.771202 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.771359 kubelet[2789]: W0123 18:50:18.771350 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.771411 kubelet[2789]: E0123 18:50:18.771398 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.771825 kubelet[2789]: E0123 18:50:18.771807 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.771928 kubelet[2789]: W0123 18:50:18.771910 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.771928 kubelet[2789]: E0123 18:50:18.771919 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.772249 kubelet[2789]: I0123 18:50:18.771992 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a5fa6c0-649d-4612-b31e-23030250d313-kubelet-dir\") pod \"csi-node-driver-hl4mg\" (UID: \"2a5fa6c0-649d-4612-b31e-23030250d313\") " pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:18.772531 kubelet[2789]: E0123 18:50:18.772468 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.772531 kubelet[2789]: W0123 18:50:18.772478 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.772531 kubelet[2789]: E0123 18:50:18.772486 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.772740 kubelet[2789]: E0123 18:50:18.772716 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.772740 kubelet[2789]: W0123 18:50:18.772724 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.772740 kubelet[2789]: E0123 18:50:18.772730 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.773647 kubelet[2789]: E0123 18:50:18.773555 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.773647 kubelet[2789]: W0123 18:50:18.773563 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.773647 kubelet[2789]: E0123 18:50:18.773569 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.773821 kubelet[2789]: E0123 18:50:18.773813 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.773895 kubelet[2789]: W0123 18:50:18.773850 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.773895 kubelet[2789]: E0123 18:50:18.773857 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.774288 kubelet[2789]: E0123 18:50:18.774143 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.774288 kubelet[2789]: W0123 18:50:18.774150 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.774288 kubelet[2789]: E0123 18:50:18.774157 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.774429 kubelet[2789]: E0123 18:50:18.774406 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.774429 kubelet[2789]: W0123 18:50:18.774414 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.774429 kubelet[2789]: E0123 18:50:18.774420 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.778288 containerd[1630]: time="2026-01-23T18:50:18.778255858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-864c5b9cd5-dd4rw,Uid:8e475a9b-0284-430b-999e-1531a8efdb04,Namespace:calico-system,Attempt:0,} returns sandbox id \"6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a\"" Jan 23 18:50:18.779252 containerd[1630]: time="2026-01-23T18:50:18.779239029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:50:18.780342 systemd[1]: Started cri-containerd-1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa.scope - libcontainer container 1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa. 
Jan 23 18:50:18.807733 containerd[1630]: time="2026-01-23T18:50:18.807622298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5bxtv,Uid:fb689241-7ce2-4d1b-b9c7-eb5c3b5264a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\"" Jan 23 18:50:18.875260 kubelet[2789]: E0123 18:50:18.875159 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.875260 kubelet[2789]: W0123 18:50:18.875214 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.875260 kubelet[2789]: E0123 18:50:18.875235 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.875638 kubelet[2789]: E0123 18:50:18.875563 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.875638 kubelet[2789]: W0123 18:50:18.875616 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.875638 kubelet[2789]: E0123 18:50:18.875628 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.876129 kubelet[2789]: E0123 18:50:18.876067 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.876129 kubelet[2789]: W0123 18:50:18.876106 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.876129 kubelet[2789]: E0123 18:50:18.876137 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.876687 kubelet[2789]: E0123 18:50:18.876655 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.876687 kubelet[2789]: W0123 18:50:18.876685 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.876804 kubelet[2789]: E0123 18:50:18.876709 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.877313 kubelet[2789]: E0123 18:50:18.877264 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.877313 kubelet[2789]: W0123 18:50:18.877291 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.877313 kubelet[2789]: E0123 18:50:18.877310 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.877723 kubelet[2789]: E0123 18:50:18.877665 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.877723 kubelet[2789]: W0123 18:50:18.877697 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.877723 kubelet[2789]: E0123 18:50:18.877711 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.878098 kubelet[2789]: E0123 18:50:18.878068 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.878098 kubelet[2789]: W0123 18:50:18.878086 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.878098 kubelet[2789]: E0123 18:50:18.878100 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.878613 kubelet[2789]: E0123 18:50:18.878546 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.878613 kubelet[2789]: W0123 18:50:18.878572 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.878613 kubelet[2789]: E0123 18:50:18.878591 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.879044 kubelet[2789]: E0123 18:50:18.878982 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.879044 kubelet[2789]: W0123 18:50:18.879012 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.879044 kubelet[2789]: E0123 18:50:18.879032 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.879585 kubelet[2789]: E0123 18:50:18.879544 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.879585 kubelet[2789]: W0123 18:50:18.879576 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.879754 kubelet[2789]: E0123 18:50:18.879598 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.880088 kubelet[2789]: E0123 18:50:18.880042 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.880088 kubelet[2789]: W0123 18:50:18.880073 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.880224 kubelet[2789]: E0123 18:50:18.880092 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.880660 kubelet[2789]: E0123 18:50:18.880633 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.880660 kubelet[2789]: W0123 18:50:18.880659 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.880769 kubelet[2789]: E0123 18:50:18.880675 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.881088 kubelet[2789]: E0123 18:50:18.881060 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.881088 kubelet[2789]: W0123 18:50:18.881081 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.881177 kubelet[2789]: E0123 18:50:18.881095 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.881621 kubelet[2789]: E0123 18:50:18.881574 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.881621 kubelet[2789]: W0123 18:50:18.881600 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.881621 kubelet[2789]: E0123 18:50:18.881619 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.882036 kubelet[2789]: E0123 18:50:18.882000 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.882036 kubelet[2789]: W0123 18:50:18.882020 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.882036 kubelet[2789]: E0123 18:50:18.882034 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.882697 kubelet[2789]: E0123 18:50:18.882647 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.882697 kubelet[2789]: W0123 18:50:18.882674 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.882697 kubelet[2789]: E0123 18:50:18.882691 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.883375 kubelet[2789]: E0123 18:50:18.883051 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.883375 kubelet[2789]: W0123 18:50:18.883064 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.883375 kubelet[2789]: E0123 18:50:18.883078 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.883630 kubelet[2789]: E0123 18:50:18.883518 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.883630 kubelet[2789]: W0123 18:50:18.883532 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.883630 kubelet[2789]: E0123 18:50:18.883547 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.884119 kubelet[2789]: E0123 18:50:18.883957 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.884119 kubelet[2789]: W0123 18:50:18.883971 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.884119 kubelet[2789]: E0123 18:50:18.883985 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.884551 kubelet[2789]: E0123 18:50:18.884514 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.884551 kubelet[2789]: W0123 18:50:18.884539 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.884551 kubelet[2789]: E0123 18:50:18.884553 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.885043 kubelet[2789]: E0123 18:50:18.885015 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.885043 kubelet[2789]: W0123 18:50:18.885037 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.885229 kubelet[2789]: E0123 18:50:18.885055 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.885653 kubelet[2789]: E0123 18:50:18.885615 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.885653 kubelet[2789]: W0123 18:50:18.885639 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.885776 kubelet[2789]: E0123 18:50:18.885657 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.886420 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.887611 kubelet[2789]: W0123 18:50:18.886445 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.886462 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.886896 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.887611 kubelet[2789]: W0123 18:50:18.886910 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.886925 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.887404 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.887611 kubelet[2789]: W0123 18:50:18.887418 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.887611 kubelet[2789]: E0123 18:50:18.887434 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:18.904213 kubelet[2789]: E0123 18:50:18.903293 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:18.904213 kubelet[2789]: W0123 18:50:18.903321 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:18.904213 kubelet[2789]: E0123 18:50:18.903345 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:20.139609 kubelet[2789]: E0123 18:50:20.139547 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:20.754963 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2879913509.mount: Deactivated successfully. 
Jan 23 18:50:21.977943 containerd[1630]: time="2026-01-23T18:50:21.977893733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:21.979038 containerd[1630]: time="2026-01-23T18:50:21.978884430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 18:50:21.979829 containerd[1630]: time="2026-01-23T18:50:21.979808639Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:21.981495 containerd[1630]: time="2026-01-23T18:50:21.981471296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:50:21.981829 containerd[1630]: time="2026-01-23T18:50:21.981805666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 3.202496206s"
Jan 23 18:50:21.981876 containerd[1630]: time="2026-01-23T18:50:21.981867043Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 18:50:21.982717 containerd[1630]: time="2026-01-23T18:50:21.982539272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 18:50:21.993806 containerd[1630]: time="2026-01-23T18:50:21.993783513Z" level=info msg="CreateContainer within sandbox \"6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 18:50:22.003341 containerd[1630]: time="2026-01-23T18:50:22.003321385Z" level=info msg="Container f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:50:22.010167 containerd[1630]: time="2026-01-23T18:50:22.010140141Z" level=info msg="CreateContainer within sandbox \"6789c9274481e1b821cbb0c3dc6b15244541ca01da0c805ac2e8a4570d5d473a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e\""
Jan 23 18:50:22.010725 containerd[1630]: time="2026-01-23T18:50:22.010694773Z" level=info msg="StartContainer for \"f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e\""
Jan 23 18:50:22.011722 containerd[1630]: time="2026-01-23T18:50:22.011429533Z" level=info msg="connecting to shim f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e" address="unix:///run/containerd/s/6b1e2862481d459464b50040386527fd86296aafce237723c615cad69bbd58dd" protocol=ttrpc version=3
Jan 23 18:50:22.030302 systemd[1]: Started cri-containerd-f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e.scope - libcontainer container f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e.
Jan 23 18:50:22.072574 containerd[1630]: time="2026-01-23T18:50:22.072543953Z" level=info msg="StartContainer for \"f4734f4e10b0e7620702f54d8d8580e0f4171a29d5d712192e5e02db8d57d09e\" returns successfully"
Jan 23 18:50:22.140406 kubelet[2789]: E0123 18:50:22.140347 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313"
Error: unexpected end of JSON input" Jan 23 18:50:22.291781 kubelet[2789]: E0123 18:50:22.291764 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.291781 kubelet[2789]: W0123 18:50:22.291776 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.291781 kubelet[2789]: E0123 18:50:22.291783 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.291981 kubelet[2789]: E0123 18:50:22.291966 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.291981 kubelet[2789]: W0123 18:50:22.291976 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.291981 kubelet[2789]: E0123 18:50:22.291982 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.292219 kubelet[2789]: E0123 18:50:22.292203 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.292219 kubelet[2789]: W0123 18:50:22.292214 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.292219 kubelet[2789]: E0123 18:50:22.292220 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.292467 kubelet[2789]: E0123 18:50:22.292438 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.292467 kubelet[2789]: W0123 18:50:22.292459 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.292467 kubelet[2789]: E0123 18:50:22.292465 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.292800 kubelet[2789]: E0123 18:50:22.292722 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.292800 kubelet[2789]: W0123 18:50:22.292731 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.292800 kubelet[2789]: E0123 18:50:22.292738 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:22.293022 kubelet[2789]: E0123 18:50:22.293005 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.293022 kubelet[2789]: W0123 18:50:22.293017 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.293022 kubelet[2789]: E0123 18:50:22.293024 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.293278 kubelet[2789]: E0123 18:50:22.293253 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.293278 kubelet[2789]: W0123 18:50:22.293263 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.293278 kubelet[2789]: E0123 18:50:22.293270 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.293463 kubelet[2789]: E0123 18:50:22.293449 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.293463 kubelet[2789]: W0123 18:50:22.293459 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.293463 kubelet[2789]: E0123 18:50:22.293465 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.293650 kubelet[2789]: E0123 18:50:22.293634 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.293650 kubelet[2789]: W0123 18:50:22.293645 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.293650 kubelet[2789]: E0123 18:50:22.293650 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.293968 kubelet[2789]: E0123 18:50:22.293836 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.293968 kubelet[2789]: W0123 18:50:22.293844 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.293968 kubelet[2789]: E0123 18:50:22.293850 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:22.294119 kubelet[2789]: E0123 18:50:22.294111 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.294150 kubelet[2789]: W0123 18:50:22.294140 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.294150 kubelet[2789]: E0123 18:50:22.294149 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.301998 kubelet[2789]: E0123 18:50:22.301976 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.301998 kubelet[2789]: W0123 18:50:22.301988 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.301998 kubelet[2789]: E0123 18:50:22.301997 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.302297 kubelet[2789]: E0123 18:50:22.302273 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.302297 kubelet[2789]: W0123 18:50:22.302291 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.302339 kubelet[2789]: E0123 18:50:22.302297 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.302545 kubelet[2789]: E0123 18:50:22.302527 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.302545 kubelet[2789]: W0123 18:50:22.302537 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.302545 kubelet[2789]: E0123 18:50:22.302543 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.302779 kubelet[2789]: E0123 18:50:22.302762 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.302779 kubelet[2789]: W0123 18:50:22.302773 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.302779 kubelet[2789]: E0123 18:50:22.302779 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:22.303015 kubelet[2789]: E0123 18:50:22.302998 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.303015 kubelet[2789]: W0123 18:50:22.303007 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.303015 kubelet[2789]: E0123 18:50:22.303013 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.303242 kubelet[2789]: E0123 18:50:22.303224 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.303242 kubelet[2789]: W0123 18:50:22.303234 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.303242 kubelet[2789]: E0123 18:50:22.303241 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.303631 kubelet[2789]: E0123 18:50:22.303612 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.303631 kubelet[2789]: W0123 18:50:22.303626 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.303631 kubelet[2789]: E0123 18:50:22.303633 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.304629 kubelet[2789]: E0123 18:50:22.304598 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.304629 kubelet[2789]: W0123 18:50:22.304612 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.304629 kubelet[2789]: E0123 18:50:22.304620 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.304833 kubelet[2789]: E0123 18:50:22.304810 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.304862 kubelet[2789]: W0123 18:50:22.304835 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.304862 kubelet[2789]: E0123 18:50:22.304842 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:22.305097 kubelet[2789]: E0123 18:50:22.305079 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.305097 kubelet[2789]: W0123 18:50:22.305091 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.305097 kubelet[2789]: E0123 18:50:22.305098 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.305411 kubelet[2789]: E0123 18:50:22.305393 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.305411 kubelet[2789]: W0123 18:50:22.305406 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.305411 kubelet[2789]: E0123 18:50:22.305412 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.305752 kubelet[2789]: E0123 18:50:22.305735 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.305780 kubelet[2789]: W0123 18:50:22.305767 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.305780 kubelet[2789]: E0123 18:50:22.305774 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.306039 kubelet[2789]: E0123 18:50:22.306021 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.306039 kubelet[2789]: W0123 18:50:22.306034 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.306172 kubelet[2789]: E0123 18:50:22.306041 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.306344 kubelet[2789]: E0123 18:50:22.306328 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.306344 kubelet[2789]: W0123 18:50:22.306340 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.306377 kubelet[2789]: E0123 18:50:22.306346 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:22.306563 kubelet[2789]: E0123 18:50:22.306548 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.306563 kubelet[2789]: W0123 18:50:22.306559 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.306563 kubelet[2789]: E0123 18:50:22.306565 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.306795 kubelet[2789]: E0123 18:50:22.306755 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.306795 kubelet[2789]: W0123 18:50:22.306764 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.306795 kubelet[2789]: E0123 18:50:22.306771 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.307059 kubelet[2789]: E0123 18:50:22.307030 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.307059 kubelet[2789]: W0123 18:50:22.307042 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.307059 kubelet[2789]: E0123 18:50:22.307048 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:22.307302 kubelet[2789]: E0123 18:50:22.307271 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:22.307302 kubelet[2789]: W0123 18:50:22.307290 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:22.307302 kubelet[2789]: E0123 18:50:22.307302 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.235388 kubelet[2789]: I0123 18:50:23.235312 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:50:23.299034 kubelet[2789]: E0123 18:50:23.298982 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.299034 kubelet[2789]: W0123 18:50:23.299014 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.299034 kubelet[2789]: E0123 18:50:23.299042 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.299563 kubelet[2789]: E0123 18:50:23.299503 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.299563 kubelet[2789]: W0123 18:50:23.299516 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.299563 kubelet[2789]: E0123 18:50:23.299532 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.300215 kubelet[2789]: E0123 18:50:23.299953 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.300215 kubelet[2789]: W0123 18:50:23.299974 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.300215 kubelet[2789]: E0123 18:50:23.299989 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.300423 kubelet[2789]: E0123 18:50:23.300346 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.300423 kubelet[2789]: W0123 18:50:23.300358 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.300423 kubelet[2789]: E0123 18:50:23.300372 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.300739 kubelet[2789]: E0123 18:50:23.300709 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.300739 kubelet[2789]: W0123 18:50:23.300727 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.300808 kubelet[2789]: E0123 18:50:23.300740 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.301066 kubelet[2789]: E0123 18:50:23.301037 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.301066 kubelet[2789]: W0123 18:50:23.301055 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.301282 kubelet[2789]: E0123 18:50:23.301066 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.301647 kubelet[2789]: E0123 18:50:23.301583 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.301647 kubelet[2789]: W0123 18:50:23.301629 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.301855 kubelet[2789]: E0123 18:50:23.301665 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.302084 kubelet[2789]: E0123 18:50:23.302046 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.302084 kubelet[2789]: W0123 18:50:23.302070 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.302084 kubelet[2789]: E0123 18:50:23.302086 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.302519 kubelet[2789]: E0123 18:50:23.302489 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.302519 kubelet[2789]: W0123 18:50:23.302509 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.302519 kubelet[2789]: E0123 18:50:23.302522 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.303054 kubelet[2789]: E0123 18:50:23.303025 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.303054 kubelet[2789]: W0123 18:50:23.303050 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.303137 kubelet[2789]: E0123 18:50:23.303064 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.303476 kubelet[2789]: E0123 18:50:23.303447 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.303476 kubelet[2789]: W0123 18:50:23.303465 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.303476 kubelet[2789]: E0123 18:50:23.303478 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.304004 kubelet[2789]: E0123 18:50:23.303820 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.304004 kubelet[2789]: W0123 18:50:23.303839 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.304004 kubelet[2789]: E0123 18:50:23.303853 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.304243 kubelet[2789]: E0123 18:50:23.304226 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.304364 kubelet[2789]: W0123 18:50:23.304303 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.304364 kubelet[2789]: E0123 18:50:23.304347 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.304805 kubelet[2789]: E0123 18:50:23.304691 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.304805 kubelet[2789]: W0123 18:50:23.304716 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.304805 kubelet[2789]: E0123 18:50:23.304732 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.305089 kubelet[2789]: E0123 18:50:23.305069 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.305089 kubelet[2789]: W0123 18:50:23.305086 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.305159 kubelet[2789]: E0123 18:50:23.305099 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.309929 kubelet[2789]: E0123 18:50:23.309886 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.309929 kubelet[2789]: W0123 18:50:23.309907 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.309929 kubelet[2789]: E0123 18:50:23.309921 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.310674 kubelet[2789]: E0123 18:50:23.310272 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.310674 kubelet[2789]: W0123 18:50:23.310283 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.310674 kubelet[2789]: E0123 18:50:23.310342 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.310938 kubelet[2789]: E0123 18:50:23.310899 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.310938 kubelet[2789]: W0123 18:50:23.310924 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.310938 kubelet[2789]: E0123 18:50:23.310940 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.311484 kubelet[2789]: E0123 18:50:23.311427 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.311484 kubelet[2789]: W0123 18:50:23.311450 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.311579 kubelet[2789]: E0123 18:50:23.311528 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.311981 kubelet[2789]: E0123 18:50:23.311950 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.311981 kubelet[2789]: W0123 18:50:23.311973 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.312149 kubelet[2789]: E0123 18:50:23.311988 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.312444 kubelet[2789]: E0123 18:50:23.312394 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.312444 kubelet[2789]: W0123 18:50:23.312435 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.312537 kubelet[2789]: E0123 18:50:23.312450 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.312912 kubelet[2789]: E0123 18:50:23.312882 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.312912 kubelet[2789]: W0123 18:50:23.312902 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.313000 kubelet[2789]: E0123 18:50:23.312916 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.313416 kubelet[2789]: E0123 18:50:23.313372 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.313416 kubelet[2789]: W0123 18:50:23.313393 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.313498 kubelet[2789]: E0123 18:50:23.313428 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.314074 kubelet[2789]: E0123 18:50:23.314041 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.314074 kubelet[2789]: W0123 18:50:23.314064 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.314158 kubelet[2789]: E0123 18:50:23.314078 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.314558 kubelet[2789]: E0123 18:50:23.314527 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.314558 kubelet[2789]: W0123 18:50:23.314548 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.314640 kubelet[2789]: E0123 18:50:23.314563 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.314925 kubelet[2789]: E0123 18:50:23.314895 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.314925 kubelet[2789]: W0123 18:50:23.314915 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.315006 kubelet[2789]: E0123 18:50:23.314929 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.315431 kubelet[2789]: E0123 18:50:23.315392 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.315431 kubelet[2789]: W0123 18:50:23.315422 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.315516 kubelet[2789]: E0123 18:50:23.315437 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.315862 kubelet[2789]: E0123 18:50:23.315831 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.315862 kubelet[2789]: W0123 18:50:23.315852 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.315940 kubelet[2789]: E0123 18:50:23.315865 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.316337 kubelet[2789]: E0123 18:50:23.316305 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.316337 kubelet[2789]: W0123 18:50:23.316326 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.316434 kubelet[2789]: E0123 18:50:23.316340 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.316919 kubelet[2789]: E0123 18:50:23.316888 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.316919 kubelet[2789]: W0123 18:50:23.316909 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.317001 kubelet[2789]: E0123 18:50:23.316922 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:50:23.317388 kubelet[2789]: E0123 18:50:23.317334 2789 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:50:23.317388 kubelet[2789]: W0123 18:50:23.317355 2789 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:50:23.317388 kubelet[2789]: E0123 18:50:23.317369 2789 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:50:23.986467 containerd[1630]: time="2026-01-23T18:50:23.986344591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:23.987840 containerd[1630]: time="2026-01-23T18:50:23.987798613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 18:50:23.989171 containerd[1630]: time="2026-01-23T18:50:23.989106049Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:23.994259 containerd[1630]: time="2026-01-23T18:50:23.994004828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:23.997165 containerd[1630]: time="2026-01-23T18:50:23.997107931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 2.014524443s" Jan 23 18:50:23.997290 containerd[1630]: time="2026-01-23T18:50:23.997161536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:50:24.004065 containerd[1630]: time="2026-01-23T18:50:24.003942174Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,} Jan 23 18:50:24.016720 containerd[1630]: time="2026-01-23T18:50:24.016554676Z" level=info msg="Container 3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e: 
CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:24.031781 containerd[1630]: time="2026-01-23T18:50:24.031682072Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e\"" Jan 23 18:50:24.033056 containerd[1630]: time="2026-01-23T18:50:24.033005991Z" level=info msg="StartContainer for \"3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e\"" Jan 23 18:50:24.036350 containerd[1630]: time="2026-01-23T18:50:24.036260659Z" level=info msg="connecting to shim 3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e" address="unix:///run/containerd/s/93aef2cfb2525551fdf221b35d05ec91ee6ba6af2f5c2c7a73f94891705c8b69" protocol=ttrpc version=3 Jan 23 18:50:24.074415 systemd[1]: Started cri-containerd-3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e.scope - libcontainer container 3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e. Jan 23 18:50:24.140241 kubelet[2789]: E0123 18:50:24.139701 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:24.192751 containerd[1630]: time="2026-01-23T18:50:24.192683014Z" level=info msg="StartContainer for \"3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e\" returns successfully" Jan 23 18:50:24.224315 systemd[1]: cri-containerd-3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e.scope: Deactivated successfully. Jan 23 18:50:24.230165 containerd[1630]: time="2026-01-23T18:50:24.229171946Z" level=info msg="received container exit event container_id:\"3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e\" id:\"3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e\" pid:3520 exited_at:{seconds:1769194224 nanos:228629963}" Jan 23 18:50:24.264964 kubelet[2789]: I0123 18:50:24.264339 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-864c5b9cd5-dd4rw" podStartSLOduration=3.060783489 podStartE2EDuration="6.264280102s" podCreationTimestamp="2026-01-23 18:50:18 +0000 UTC" firstStartedPulling="2026-01-23 18:50:18.778934456 +0000 UTC m=+19.714878625" lastFinishedPulling="2026-01-23 18:50:21.982431079 +0000 UTC m=+22.918375238" observedRunningTime="2026-01-23 18:50:22.248546797 +0000 UTC m=+23.184490976" watchObservedRunningTime="2026-01-23 18:50:24.264280102 +0000 UTC m=+25.200224291" Jan 23 18:50:24.279763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3d2926f5246becbe01cb4858f9b1c8c6335a7d3ca3ec28013d48503591f2100e-rootfs.mount: Deactivated successfully. 
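The FlexVolume bursts above are the kubelet's periodic plugin probe: the plugin directory nodeagent~uds exists under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, but its uds executable is missing, so the "init" call returns empty output and the JSON unmarshal fails with "unexpected end of JSON input". A minimal sketch of what a FlexVolume driver is expected to print for "init" follows; this Go stand-in is an assumption for illustration, not the binary this node was missing.

```go
// Minimal sketch of a FlexVolume driver entry point like the one the kubelet
// probes above. Any driver invoked as "<driver> init" must print a JSON
// status object to stdout; printing nothing is exactly what produces the
// "unexpected end of JSON input" errors in this log.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // no attach/detach support
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls report "Not supported" rather than staying silent.
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
}
```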
Jan 23 18:50:25.249230 containerd[1630]: time="2026-01-23T18:50:25.249023241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 18:50:26.139689 kubelet[2789]: E0123 18:50:26.139605 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:27.596302 kubelet[2789]: I0123 18:50:27.595446 2789 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 18:50:28.140043 kubelet[2789]: E0123 18:50:28.139655 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:29.554043 containerd[1630]: time="2026-01-23T18:50:29.553995207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:29.556616 containerd[1630]: time="2026-01-23T18:50:29.556473743Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 18:50:29.557399 containerd[1630]: time="2026-01-23T18:50:29.557377687Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:29.560435 containerd[1630]: time="2026-01-23T18:50:29.560413753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:29.560837 containerd[1630]: time="2026-01-23T18:50:29.560820111Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.311696972s" Jan 23 18:50:29.560889 containerd[1630]: time="2026-01-23T18:50:29.560879036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 18:50:29.565989 containerd[1630]: time="2026-01-23T18:50:29.565961906Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 18:50:29.578088 containerd[1630]: time="2026-01-23T18:50:29.577386226Z" level=info msg="Container fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:29.579674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329368098.mount: Deactivated successfully. 
Jan 23 18:50:29.587538 containerd[1630]: time="2026-01-23T18:50:29.587511084Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6\"" Jan 23 18:50:29.587954 containerd[1630]: time="2026-01-23T18:50:29.587909223Z" level=info msg="StartContainer for \"fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6\"" Jan 23 18:50:29.589600 containerd[1630]: time="2026-01-23T18:50:29.589569140Z" level=info msg="connecting to shim fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6" address="unix:///run/containerd/s/93aef2cfb2525551fdf221b35d05ec91ee6ba6af2f5c2c7a73f94891705c8b69" protocol=ttrpc version=3 Jan 23 18:50:29.608305 systemd[1]: Started cri-containerd-fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6.scope - libcontainer container fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6. Jan 23 18:50:29.691824 containerd[1630]: time="2026-01-23T18:50:29.691783618Z" level=info msg="StartContainer for \"fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6\" returns successfully" Jan 23 18:50:30.140912 kubelet[2789]: E0123 18:50:30.140321 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:30.297422 containerd[1630]: time="2026-01-23T18:50:30.297347797Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:50:30.307595 systemd[1]: cri-containerd-fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6.scope: Deactivated successfully. Jan 23 18:50:30.308318 systemd[1]: cri-containerd-fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6.scope: Consumed 585ms CPU time, 207.4M memory peak, 171.3M written to disk. Jan 23 18:50:30.311284 containerd[1630]: time="2026-01-23T18:50:30.311244252Z" level=info msg="received container exit event container_id:\"fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6\" id:\"fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6\" pid:3580 exited_at:{seconds:1769194230 nanos:310460630}" Jan 23 18:50:30.354782 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc40c66e1ece60bf313621c2c311731dda047994ac043fb62c56c45eef8a9ae6-rootfs.mount: Deactivated successfully. Jan 23 18:50:30.380328 kubelet[2789]: I0123 18:50:30.380264 2789 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 18:50:30.451831 systemd[1]: Created slice kubepods-besteffort-podc91754db_1ae1_406f_a4ae_966042e218eb.slice - libcontainer container kubepods-besteffort-podc91754db_1ae1_406f_a4ae_966042e218eb.slice. 
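The install-cni container above writes Calico's CNI configuration into /etc/cni/net.d, and the reload error at 18:50:30.297 fires because the kubeconfig file appears before a loadable network config does; containerd keeps reporting NetworkReady=false until a config it can parse shows up in that directory. A minimal sketch of that readiness condition, assuming the conventional Calico conflist layout (the file names are not taken from this log):

```go
// Sketch of the gate behind "cni plugin not initialized": the runtime only
// becomes NetworkReady once a CNI network config exists in /etc/cni/net.d.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Calico conventionally installs something like 10-calico.conflist here.
	matches, err := filepath.Glob("/etc/cni/net.d/*.conflist")
	if err != nil || len(matches) == 0 {
		fmt.Println("no CNI network config yet; pods stay NetworkReady=false")
		os.Exit(1)
	}
	for _, m := range matches {
		if fi, statErr := os.Stat(m); statErr == nil {
			fmt.Printf("CNI config present: %s (%d bytes)\n", m, fi.Size())
		}
	}
}
```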
Jan 23 18:50:30.453663 kubelet[2789]: E0123 18:50:30.452896 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4459-2-3-1-de7581f71a\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-3-1-de7581f71a' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-backend-key-pair\"" type="*v1.Secret" Jan 23 18:50:30.453663 kubelet[2789]: E0123 18:50:30.452933 2789 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:ci-4459-2-3-1-de7581f71a\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4459-2-3-1-de7581f71a' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"whisker-ca-bundle\"" type="*v1.ConfigMap" Jan 23 18:50:30.463370 kubelet[2789]: I0123 18:50:30.463338 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10031b0c-bbc2-4f8e-9df2-1b6971eda033-config\") pod \"goldmane-666569f655-9pqgl\" (UID: \"10031b0c-bbc2-4f8e-9df2-1b6971eda033\") " pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.463370 kubelet[2789]: I0123 18:50:30.463363 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c91754db-1ae1-406f-a4ae-966042e218eb-tigera-ca-bundle\") pod \"calico-kube-controllers-68dbfc5dfc-dffpg\" (UID: \"c91754db-1ae1-406f-a4ae-966042e218eb\") " pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" Jan 23 18:50:30.463468 kubelet[2789]: I0123 18:50:30.463375 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d16a8768-4465-40dd-b156-fd821a42a741-config-volume\") pod \"coredns-674b8bbfcf-vr9zx\" (UID: \"d16a8768-4465-40dd-b156-fd821a42a741\") " pod="kube-system/coredns-674b8bbfcf-vr9zx" Jan 23 18:50:30.463468 kubelet[2789]: I0123 18:50:30.463387 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzgkz\" (UniqueName: \"kubernetes.io/projected/d16a8768-4465-40dd-b156-fd821a42a741-kube-api-access-vzgkz\") pod \"coredns-674b8bbfcf-vr9zx\" (UID: \"d16a8768-4465-40dd-b156-fd821a42a741\") " pod="kube-system/coredns-674b8bbfcf-vr9zx" Jan 23 18:50:30.463468 kubelet[2789]: I0123 18:50:30.463398 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-backend-key-pair\") pod \"whisker-6fd8694b95-qrwpz\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " pod="calico-system/whisker-6fd8694b95-qrwpz" Jan 23 18:50:30.463468 kubelet[2789]: I0123 18:50:30.463408 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-727b2\" (UniqueName: \"kubernetes.io/projected/5859e809-2c43-4cb8-bea4-bfd09f8d4f24-kube-api-access-727b2\") pod \"coredns-674b8bbfcf-s24fq\" (UID: \"5859e809-2c43-4cb8-bea4-bfd09f8d4f24\") " pod="kube-system/coredns-674b8bbfcf-s24fq" Jan 23 18:50:30.463468 kubelet[2789]: I0123 18:50:30.463427 2789 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68gkg\" (UniqueName: \"kubernetes.io/projected/18c41309-8840-4c5c-a0e6-d6fb41f37c90-kube-api-access-68gkg\") pod \"calico-apiserver-76b4596f9f-n7xpc\" (UID: \"18c41309-8840-4c5c-a0e6-d6fb41f37c90\") " pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" Jan 23 18:50:30.463570 kubelet[2789]: I0123 18:50:30.463438 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z2s4\" (UniqueName: \"kubernetes.io/projected/10031b0c-bbc2-4f8e-9df2-1b6971eda033-kube-api-access-8z2s4\") pod \"goldmane-666569f655-9pqgl\" (UID: \"10031b0c-bbc2-4f8e-9df2-1b6971eda033\") " pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.463570 kubelet[2789]: I0123 18:50:30.463448 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle\") pod \"whisker-6fd8694b95-qrwpz\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " pod="calico-system/whisker-6fd8694b95-qrwpz" Jan 23 18:50:30.463570 kubelet[2789]: I0123 18:50:30.463458 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7596438c-5371-4696-b02d-1c3d820234e2-calico-apiserver-certs\") pod \"calico-apiserver-b7f687658-l86zs\" (UID: \"7596438c-5371-4696-b02d-1c3d820234e2\") " pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" Jan 23 18:50:30.463570 kubelet[2789]: I0123 18:50:30.463469 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gx6dt\" (UniqueName: \"kubernetes.io/projected/7596438c-5371-4696-b02d-1c3d820234e2-kube-api-access-gx6dt\") pod \"calico-apiserver-b7f687658-l86zs\" (UID: \"7596438c-5371-4696-b02d-1c3d820234e2\") " pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" Jan 23 18:50:30.463570 kubelet[2789]: I0123 18:50:30.463480 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa-calico-apiserver-certs\") pod \"calico-apiserver-b7f687658-b9829\" (UID: \"6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa\") " pod="calico-apiserver/calico-apiserver-b7f687658-b9829" Jan 23 18:50:30.463656 kubelet[2789]: I0123 18:50:30.463490 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10031b0c-bbc2-4f8e-9df2-1b6971eda033-goldmane-ca-bundle\") pod \"goldmane-666569f655-9pqgl\" (UID: \"10031b0c-bbc2-4f8e-9df2-1b6971eda033\") " pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.463656 kubelet[2789]: I0123 18:50:30.463500 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtm9c\" (UniqueName: \"kubernetes.io/projected/c91754db-1ae1-406f-a4ae-966042e218eb-kube-api-access-xtm9c\") pod \"calico-kube-controllers-68dbfc5dfc-dffpg\" (UID: \"c91754db-1ae1-406f-a4ae-966042e218eb\") " pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" Jan 23 18:50:30.463656 kubelet[2789]: I0123 18:50:30.463514 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/18c41309-8840-4c5c-a0e6-d6fb41f37c90-calico-apiserver-certs\") pod \"calico-apiserver-76b4596f9f-n7xpc\" (UID: \"18c41309-8840-4c5c-a0e6-d6fb41f37c90\") " pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" Jan 23 18:50:30.463656 kubelet[2789]: I0123 18:50:30.463526 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfvpx\" (UniqueName: \"kubernetes.io/projected/6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa-kube-api-access-zfvpx\") pod \"calico-apiserver-b7f687658-b9829\" (UID: \"6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa\") " pod="calico-apiserver/calico-apiserver-b7f687658-b9829" Jan 23 18:50:30.463656 kubelet[2789]: I0123 18:50:30.463538 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nlb6\" (UniqueName: \"kubernetes.io/projected/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-kube-api-access-5nlb6\") pod \"whisker-6fd8694b95-qrwpz\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " pod="calico-system/whisker-6fd8694b95-qrwpz" Jan 23 18:50:30.463744 kubelet[2789]: I0123 18:50:30.463552 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/10031b0c-bbc2-4f8e-9df2-1b6971eda033-goldmane-key-pair\") pod \"goldmane-666569f655-9pqgl\" (UID: \"10031b0c-bbc2-4f8e-9df2-1b6971eda033\") " pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.463744 kubelet[2789]: I0123 18:50:30.463562 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5859e809-2c43-4cb8-bea4-bfd09f8d4f24-config-volume\") pod \"coredns-674b8bbfcf-s24fq\" (UID: \"5859e809-2c43-4cb8-bea4-bfd09f8d4f24\") " pod="kube-system/coredns-674b8bbfcf-s24fq" Jan 23 18:50:30.468954 systemd[1]: Created slice kubepods-besteffort-pod7596438c_5371_4696_b02d_1c3d820234e2.slice - libcontainer container kubepods-besteffort-pod7596438c_5371_4696_b02d_1c3d820234e2.slice. Jan 23 18:50:30.481946 systemd[1]: Created slice kubepods-burstable-podd16a8768_4465_40dd_b156_fd821a42a741.slice - libcontainer container kubepods-burstable-podd16a8768_4465_40dd_b156_fd821a42a741.slice. Jan 23 18:50:30.489107 systemd[1]: Created slice kubepods-burstable-pod5859e809_2c43_4cb8_bea4_bfd09f8d4f24.slice - libcontainer container kubepods-burstable-pod5859e809_2c43_4cb8_bea4_bfd09f8d4f24.slice. Jan 23 18:50:30.496835 systemd[1]: Created slice kubepods-besteffort-podf73da5cd_5bde_4b06_9ed2_f359f7ef8813.slice - libcontainer container kubepods-besteffort-podf73da5cd_5bde_4b06_9ed2_f359f7ef8813.slice. Jan 23 18:50:30.501122 systemd[1]: Created slice kubepods-besteffort-pod6cd6b32a_ecb3_4f1c_b9d5_5bdeab914efa.slice - libcontainer container kubepods-besteffort-pod6cd6b32a_ecb3_4f1c_b9d5_5bdeab914efa.slice. Jan 23 18:50:30.507999 systemd[1]: Created slice kubepods-besteffort-pod18c41309_8840_4c5c_a0e6_d6fb41f37c90.slice - libcontainer container kubepods-besteffort-pod18c41309_8840_4c5c_a0e6_d6fb41f37c90.slice. Jan 23 18:50:30.513080 systemd[1]: Created slice kubepods-besteffort-pod10031b0c_bbc2_4f8e_9df2_1b6971eda033.slice - libcontainer container kubepods-besteffort-pod10031b0c_bbc2_4f8e_9df2_1b6971eda033.slice. 
Jan 23 18:50:30.763729 containerd[1630]: time="2026-01-23T18:50:30.763488651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbfc5dfc-dffpg,Uid:c91754db-1ae1-406f-a4ae-966042e218eb,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:30.781675 containerd[1630]: time="2026-01-23T18:50:30.781283195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-l86zs,Uid:7596438c-5371-4696-b02d-1c3d820234e2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:30.787929 containerd[1630]: time="2026-01-23T18:50:30.787892464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vr9zx,Uid:d16a8768-4465-40dd-b156-fd821a42a741,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:30.793041 containerd[1630]: time="2026-01-23T18:50:30.793005504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s24fq,Uid:5859e809-2c43-4cb8-bea4-bfd09f8d4f24,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:30.805492 containerd[1630]: time="2026-01-23T18:50:30.805434651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-b9829,Uid:6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:30.813392 containerd[1630]: time="2026-01-23T18:50:30.813347507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b4596f9f-n7xpc,Uid:18c41309-8840-4c5c-a0e6-d6fb41f37c90,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:30.815950 containerd[1630]: time="2026-01-23T18:50:30.815907227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pqgl,Uid:10031b0c-bbc2-4f8e-9df2-1b6971eda033,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:30.955282 containerd[1630]: time="2026-01-23T18:50:30.955232478Z" level=error msg="Failed to destroy network for sandbox \"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.957737 containerd[1630]: time="2026-01-23T18:50:30.957668620Z" level=error msg="Failed to destroy network for sandbox \"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.958160 containerd[1630]: time="2026-01-23T18:50:30.958067636Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-b9829,Uid:6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.958649 kubelet[2789]: E0123 18:50:30.958570 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
18:50:30.958649 kubelet[2789]: E0123 18:50:30.958640 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" Jan 23 18:50:30.958723 kubelet[2789]: E0123 18:50:30.958658 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" Jan 23 18:50:30.958723 kubelet[2789]: E0123 18:50:30.958697 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"550fe31ebec2b590835ba0a2e1756f9da314b19c766688a74b9837e9290e728b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:50:30.959864 containerd[1630]: time="2026-01-23T18:50:30.959725866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vr9zx,Uid:d16a8768-4465-40dd-b156-fd821a42a741,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.960409 kubelet[2789]: E0123 18:50:30.960295 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.960625 kubelet[2789]: E0123 18:50:30.960482 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vr9zx" Jan 23 18:50:30.960625 kubelet[2789]: E0123 18:50:30.960499 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vr9zx" Jan 23 18:50:30.961147 kubelet[2789]: E0123 18:50:30.960525 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vr9zx_kube-system(d16a8768-4465-40dd-b156-fd821a42a741)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vr9zx_kube-system(d16a8768-4465-40dd-b156-fd821a42a741)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b55d83f271ce09ed5751268ea71e00db02138ed1e28ed13344e8121a7494cc10\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vr9zx" podUID="d16a8768-4465-40dd-b156-fd821a42a741" Jan 23 18:50:30.969593 containerd[1630]: time="2026-01-23T18:50:30.969565870Z" level=error msg="Failed to destroy network for sandbox \"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.970020 containerd[1630]: time="2026-01-23T18:50:30.969965158Z" level=error msg="Failed to destroy network for sandbox \"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.971296 containerd[1630]: time="2026-01-23T18:50:30.971260464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s24fq,Uid:5859e809-2c43-4cb8-bea4-bfd09f8d4f24,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.971887 kubelet[2789]: E0123 18:50:30.971567 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.971887 kubelet[2789]: E0123 18:50:30.971602 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s24fq" Jan 23 18:50:30.971887 kubelet[2789]: E0123 18:50:30.971617 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-s24fq" Jan 23 18:50:30.971963 kubelet[2789]: E0123 18:50:30.971651 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-s24fq_kube-system(5859e809-2c43-4cb8-bea4-bfd09f8d4f24)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-s24fq_kube-system(5859e809-2c43-4cb8-bea4-bfd09f8d4f24)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60a247aee2a6eb7d998e6f67e60648bca1bc32dfd6c825cdd74a7ae029ae0830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-s24fq" podUID="5859e809-2c43-4cb8-bea4-bfd09f8d4f24" Jan 23 18:50:30.973364 containerd[1630]: time="2026-01-23T18:50:30.973301880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbfc5dfc-dffpg,Uid:c91754db-1ae1-406f-a4ae-966042e218eb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.973961 kubelet[2789]: E0123 18:50:30.973614 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.974140 kubelet[2789]: E0123 18:50:30.974042 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" Jan 23 18:50:30.974140 kubelet[2789]: E0123 18:50:30.974059 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" Jan 23 18:50:30.974140 kubelet[2789]: E0123 18:50:30.974106 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2413540a04b8164e3ede12595cecbb3c0ac8ab370fd46788aead29998fb39b6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:50:30.981724 containerd[1630]: time="2026-01-23T18:50:30.981700109Z" level=error msg="Failed to destroy network for sandbox \"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.983123 containerd[1630]: time="2026-01-23T18:50:30.983103412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pqgl,Uid:10031b0c-bbc2-4f8e-9df2-1b6971eda033,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.983378 kubelet[2789]: E0123 18:50:30.983352 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.983428 kubelet[2789]: E0123 18:50:30.983389 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.983428 kubelet[2789]: E0123 18:50:30.983404 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-9pqgl" Jan 23 18:50:30.983471 kubelet[2789]: E0123 18:50:30.983442 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2165a4b3e2d25399d2c31c8d0134d92e1f5eb70f17a6437db48ff808a0e679ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:50:30.985753 containerd[1630]: time="2026-01-23T18:50:30.985722656Z" level=error msg="Failed to destroy network for sandbox \"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.986662 containerd[1630]: time="2026-01-23T18:50:30.986645297Z" level=error msg="Failed to destroy network for sandbox \"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.986934 containerd[1630]: time="2026-01-23T18:50:30.986909415Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b4596f9f-n7xpc,Uid:18c41309-8840-4c5c-a0e6-d6fb41f37c90,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.987090 kubelet[2789]: E0123 18:50:30.987066 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.987207 kubelet[2789]: E0123 18:50:30.987153 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" Jan 23 18:50:30.987207 kubelet[2789]: E0123 18:50:30.987170 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" Jan 23 18:50:30.987461 kubelet[2789]: E0123 18:50:30.987229 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"89d36a52501072ca11ca306fad97bec5c5739443b8d625b51f1d9dfa7e8ad5b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:50:30.987975 containerd[1630]: time="2026-01-23T18:50:30.987922002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-l86zs,Uid:7596438c-5371-4696-b02d-1c3d820234e2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.988102 kubelet[2789]: E0123 18:50:30.988070 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:30.988127 kubelet[2789]: E0123 18:50:30.988118 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" Jan 23 18:50:30.988156 kubelet[2789]: E0123 18:50:30.988127 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" Jan 23 18:50:30.988202 kubelet[2789]: E0123 18:50:30.988161 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8643b97e2083e1bb004c46086551fa151ec632fa3baccfb0374b7afc5ef61f9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:50:31.281945 containerd[1630]: time="2026-01-23T18:50:31.281863002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:50:31.569734 kubelet[2789]: E0123 18:50:31.568760 2789 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: 
timed out waiting for the condition Jan 23 18:50:31.569734 kubelet[2789]: E0123 18:50:31.568908 2789 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle podName:f73da5cd-5bde-4b06-9ed2-f359f7ef8813 nodeName:}" failed. No retries permitted until 2026-01-23 18:50:32.068883029 +0000 UTC m=+33.004827228 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle") pod "whisker-6fd8694b95-qrwpz" (UID: "f73da5cd-5bde-4b06-9ed2-f359f7ef8813") : failed to sync configmap cache: timed out waiting for the condition Jan 23 18:50:32.149814 systemd[1]: Created slice kubepods-besteffort-pod2a5fa6c0_649d_4612_b31e_23030250d313.slice - libcontainer container kubepods-besteffort-pod2a5fa6c0_649d_4612_b31e_23030250d313.slice. Jan 23 18:50:32.154378 containerd[1630]: time="2026-01-23T18:50:32.154270349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl4mg,Uid:2a5fa6c0-649d-4612-b31e-23030250d313,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:32.245019 containerd[1630]: time="2026-01-23T18:50:32.244941931Z" level=error msg="Failed to destroy network for sandbox \"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.248861 containerd[1630]: time="2026-01-23T18:50:32.248810027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl4mg,Uid:2a5fa6c0-649d-4612-b31e-23030250d313,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.249504 kubelet[2789]: E0123 18:50:32.249454 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.249668 kubelet[2789]: E0123 18:50:32.249529 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl4mg" Jan 23 18:50:32.249668 kubelet[2789]: E0123 18:50:32.249564 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hl4mg" 
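The nestedpendingoperations entry above retries the failed MountVolume.SetUp after a durationBeforeRetry of 500ms; on repeated failures kubelet roughly doubles that delay up to a cap. A minimal sketch of that doubling policy, assuming an illustrative cap (the exact maximum is kubelet's, not fixed here):

```go
// Sketch of the exponential retry delay behind "durationBeforeRetry 500ms".
// Not kubelet's actual code; the cap below is an assumption for illustration.
package main

import (
	"fmt"
	"time"
)

const (
	initialBackoff = 500 * time.Millisecond // first retry delay seen in the log
	maxBackoff     = 2 * time.Minute        // assumed saturation point
)

// nextBackoff doubles the previous delay, saturating at maxBackoff.
func nextBackoff(prev time.Duration) time.Duration {
	if prev == 0 {
		return initialBackoff
	}
	next := prev * 2
	if next > maxBackoff {
		next = maxBackoff
	}
	return next
}

func main() {
	var d time.Duration
	for i := 1; i <= 6; i++ {
		d = nextBackoff(d)
		fmt.Printf("retry %d after %v\n", i, d) // 500ms, 1s, 2s, 4s, 8s, 16s
	}
}
```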
Jan 23 18:50:32.249668 kubelet[2789]: E0123 18:50:32.249643 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cd4af1adb0a20324fa87f76ae75a7cf6ccf15189182689ab4a88950e69d54413\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:32.252105 systemd[1]: run-netns-cni\x2d5ba44491\x2df589\x2da1e7\x2d5d64\x2df34bb7889d15.mount: Deactivated successfully. Jan 23 18:50:32.301743 containerd[1630]: time="2026-01-23T18:50:32.301660184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd8694b95-qrwpz,Uid:f73da5cd-5bde-4b06-9ed2-f359f7ef8813,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:32.395636 containerd[1630]: time="2026-01-23T18:50:32.395546452Z" level=error msg="Failed to destroy network for sandbox \"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.399371 containerd[1630]: time="2026-01-23T18:50:32.399308313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6fd8694b95-qrwpz,Uid:f73da5cd-5bde-4b06-9ed2-f359f7ef8813,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.402220 kubelet[2789]: E0123 18:50:32.399652 2789 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:50:32.402220 kubelet[2789]: E0123 18:50:32.399723 2789 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6fd8694b95-qrwpz" Jan 23 18:50:32.402220 kubelet[2789]: E0123 18:50:32.399767 2789 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-6fd8694b95-qrwpz" Jan 23 18:50:32.402429 kubelet[2789]: E0123 18:50:32.399836 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6fd8694b95-qrwpz_calico-system(f73da5cd-5bde-4b06-9ed2-f359f7ef8813)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6fd8694b95-qrwpz_calico-system(f73da5cd-5bde-4b06-9ed2-f359f7ef8813)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9da158280c40673346527ad42dbf6dc7698bc56755548e995bc0284041415d20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6fd8694b95-qrwpz" podUID="f73da5cd-5bde-4b06-9ed2-f359f7ef8813" Jan 23 18:50:32.404344 systemd[1]: run-netns-cni\x2ddc1b78ad\x2d0ae0\x2dfd02\x2db919\x2d5efbdc6e5750.mount: Deactivated successfully. Jan 23 18:50:37.746741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033463225.mount: Deactivated successfully. Jan 23 18:50:37.771664 containerd[1630]: time="2026-01-23T18:50:37.771195566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.772027 containerd[1630]: time="2026-01-23T18:50:37.772011221Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 18:50:37.772583 containerd[1630]: time="2026-01-23T18:50:37.772562345Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.774061 containerd[1630]: time="2026-01-23T18:50:37.774044988Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:50:37.774400 containerd[1630]: time="2026-01-23T18:50:37.774365922Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.492456908s" Jan 23 18:50:37.774435 containerd[1630]: time="2026-01-23T18:50:37.774400213Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:50:37.801309 containerd[1630]: time="2026-01-23T18:50:37.801274469Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:50:37.811694 containerd[1630]: time="2026-01-23T18:50:37.811633371Z" level=info msg="Container 24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:37.819383 containerd[1630]: time="2026-01-23T18:50:37.819341170Z" level=info msg="CreateContainer within sandbox \"1e15e45a76b86c476c5e01729e12506e985b4f83ae728bdae179274720d3f3aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280\"" 
Jan 23 18:50:37.820915 containerd[1630]: time="2026-01-23T18:50:37.820873616Z" level=info msg="StartContainer for \"24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280\"" Jan 23 18:50:37.822033 containerd[1630]: time="2026-01-23T18:50:37.822005334Z" level=info msg="connecting to shim 24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280" address="unix:///run/containerd/s/93aef2cfb2525551fdf221b35d05ec91ee6ba6af2f5c2c7a73f94891705c8b69" protocol=ttrpc version=3 Jan 23 18:50:37.856606 systemd[1]: Started cri-containerd-24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280.scope - libcontainer container 24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280. Jan 23 18:50:37.960751 containerd[1630]: time="2026-01-23T18:50:37.960722180Z" level=info msg="StartContainer for \"24fb16cf0cbcb2d5963aa953af5069b588240e431c71787503c9cf85b44e2280\" returns successfully" Jan 23 18:50:38.080718 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:50:38.080873 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 18:50:38.320503 kubelet[2789]: I0123 18:50:38.320470 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-backend-key-pair\") pod \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " Jan 23 18:50:38.320503 kubelet[2789]: I0123 18:50:38.320499 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle\") pod \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " Jan 23 18:50:38.320503 kubelet[2789]: I0123 18:50:38.320513 2789 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nlb6\" (UniqueName: \"kubernetes.io/projected/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-kube-api-access-5nlb6\") pod \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\" (UID: \"f73da5cd-5bde-4b06-9ed2-f359f7ef8813\") " Jan 23 18:50:38.321130 kubelet[2789]: I0123 18:50:38.321043 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f73da5cd-5bde-4b06-9ed2-f359f7ef8813" (UID: "f73da5cd-5bde-4b06-9ed2-f359f7ef8813"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:50:38.327205 kubelet[2789]: I0123 18:50:38.326443 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-kube-api-access-5nlb6" (OuterVolumeSpecName: "kube-api-access-5nlb6") pod "f73da5cd-5bde-4b06-9ed2-f359f7ef8813" (UID: "f73da5cd-5bde-4b06-9ed2-f359f7ef8813"). InnerVolumeSpecName "kube-api-access-5nlb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:50:38.328221 kubelet[2789]: I0123 18:50:38.328111 2789 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f73da5cd-5bde-4b06-9ed2-f359f7ef8813" (UID: "f73da5cd-5bde-4b06-9ed2-f359f7ef8813"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:50:38.421747 kubelet[2789]: I0123 18:50:38.421705 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-backend-key-pair\") on node \"ci-4459-2-3-1-de7581f71a\" DevicePath \"\"" Jan 23 18:50:38.421921 kubelet[2789]: I0123 18:50:38.421912 2789 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-whisker-ca-bundle\") on node \"ci-4459-2-3-1-de7581f71a\" DevicePath \"\"" Jan 23 18:50:38.421958 kubelet[2789]: I0123 18:50:38.421951 2789 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5nlb6\" (UniqueName: \"kubernetes.io/projected/f73da5cd-5bde-4b06-9ed2-f359f7ef8813-kube-api-access-5nlb6\") on node \"ci-4459-2-3-1-de7581f71a\" DevicePath \"\"" Jan 23 18:50:38.605123 systemd[1]: Removed slice kubepods-besteffort-podf73da5cd_5bde_4b06_9ed2_f359f7ef8813.slice - libcontainer container kubepods-besteffort-podf73da5cd_5bde_4b06_9ed2_f359f7ef8813.slice. Jan 23 18:50:38.624244 kubelet[2789]: I0123 18:50:38.623066 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5bxtv" podStartSLOduration=1.6584550359999999 podStartE2EDuration="20.623043313s" podCreationTimestamp="2026-01-23 18:50:18 +0000 UTC" firstStartedPulling="2026-01-23 18:50:18.810341678 +0000 UTC m=+19.746285847" lastFinishedPulling="2026-01-23 18:50:37.774929965 +0000 UTC m=+38.710874124" observedRunningTime="2026-01-23 18:50:38.32773292 +0000 UTC m=+39.263677089" watchObservedRunningTime="2026-01-23 18:50:38.623043313 +0000 UTC m=+39.558987512" Jan 23 18:50:38.689882 systemd[1]: Created slice kubepods-besteffort-podf81f9a64_9c09_4160_be5b_578a1ab2c98f.slice - libcontainer container kubepods-besteffort-podf81f9a64_9c09_4160_be5b_578a1ab2c98f.slice. Jan 23 18:50:38.724257 kubelet[2789]: I0123 18:50:38.724216 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zm6zb\" (UniqueName: \"kubernetes.io/projected/f81f9a64-9c09-4160-be5b-578a1ab2c98f-kube-api-access-zm6zb\") pod \"whisker-54d5d997cf-dvx46\" (UID: \"f81f9a64-9c09-4160-be5b-578a1ab2c98f\") " pod="calico-system/whisker-54d5d997cf-dvx46" Jan 23 18:50:38.724461 kubelet[2789]: I0123 18:50:38.724444 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f81f9a64-9c09-4160-be5b-578a1ab2c98f-whisker-backend-key-pair\") pod \"whisker-54d5d997cf-dvx46\" (UID: \"f81f9a64-9c09-4160-be5b-578a1ab2c98f\") " pod="calico-system/whisker-54d5d997cf-dvx46" Jan 23 18:50:38.724594 kubelet[2789]: I0123 18:50:38.724578 2789 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f81f9a64-9c09-4160-be5b-578a1ab2c98f-whisker-ca-bundle\") pod \"whisker-54d5d997cf-dvx46\" (UID: \"f81f9a64-9c09-4160-be5b-578a1ab2c98f\") " pod="calico-system/whisker-54d5d997cf-dvx46" Jan 23 18:50:38.751130 systemd[1]: var-lib-kubelet-pods-f73da5cd\x2d5bde\x2d4b06\x2d9ed2\x2df359f7ef8813-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 23 18:50:38.751609 systemd[1]: var-lib-kubelet-pods-f73da5cd\x2d5bde\x2d4b06\x2d9ed2\x2df359f7ef8813-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5nlb6.mount: Deactivated successfully. Jan 23 18:50:38.999324 containerd[1630]: time="2026-01-23T18:50:38.998715704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d5d997cf-dvx46,Uid:f81f9a64-9c09-4160-be5b-578a1ab2c98f,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:39.145786 kubelet[2789]: I0123 18:50:39.145042 2789 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f73da5cd-5bde-4b06-9ed2-f359f7ef8813" path="/var/lib/kubelet/pods/f73da5cd-5bde-4b06-9ed2-f359f7ef8813/volumes" Jan 23 18:50:39.225493 systemd-networkd[1515]: calif8ad05e1953: Link UP Jan 23 18:50:39.227116 systemd-networkd[1515]: calif8ad05e1953: Gained carrier Jan 23 18:50:39.256509 containerd[1630]: 2026-01-23 18:50:39.044 [INFO][3956] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:50:39.256509 containerd[1630]: 2026-01-23 18:50:39.085 [INFO][3956] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0 whisker-54d5d997cf- calico-system f81f9a64-9c09-4160-be5b-578a1ab2c98f 942 0 2026-01-23 18:50:38 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:54d5d997cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a whisker-54d5d997cf-dvx46 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif8ad05e1953 [] [] }} ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-" Jan 23 18:50:39.256509 containerd[1630]: 2026-01-23 18:50:39.085 [INFO][3956] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.256509 containerd[1630]: 2026-01-23 18:50:39.143 [INFO][3969] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" HandleID="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Workload="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.145 [INFO][3969] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" HandleID="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Workload="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5560), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"whisker-54d5d997cf-dvx46", "timestamp":"2026-01-23 18:50:39.143144199 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 
18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.145 [INFO][3969] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.145 [INFO][3969] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.145 [INFO][3969] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a' Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.155 [INFO][3969] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.162 [INFO][3969] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.168 [INFO][3969] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.172 [INFO][3969] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.256983 containerd[1630]: 2026-01-23 18:50:39.176 [INFO][3969] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.177 [INFO][3969] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.179 [INFO][3969] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91 Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.185 [INFO][3969] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.196 [INFO][3969] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.65/26] block=192.168.116.64/26 handle="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.196 [INFO][3969] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.65/26] handle="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.196 [INFO][3969] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
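The IPAM walk above shows the shape of Calico's block-affine allocation: take the host-wide lock, confirm this host's affinity for 192.168.116.64/26, then claim the first free address in the block (here 192.168.116.65). A simplified sketch of that allocation step, with an in-process mutex standing in for the host-wide lock and a map standing in for the datastore block:

```go
// Illustrative block-affine IPAM; real Calico persists blocks and handles
// in the datastore and skips reserved addresses.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

type block struct {
	mu     sync.Mutex // stands in for the "host-wide IPAM lock"
	prefix netip.Prefix
	used   map[netip.Addr]bool
}

// assign claims the first free address after the network address.
func (b *block) assign() (netip.Addr, bool) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if !b.used[a] {
			b.used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		prefix: netip.MustParsePrefix("192.168.116.64/26"), // host's affine block
		used:   map[netip.Addr]bool{},
	}
	ip, _ := b.assign()
	fmt.Println("assigned:", ip) // 192.168.116.65, as in the log
}
```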
Jan 23 18:50:39.257375 containerd[1630]: 2026-01-23 18:50:39.196 [INFO][3969] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.65/26] IPv6=[] ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" HandleID="k8s-pod-network.568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Workload="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.257938 containerd[1630]: 2026-01-23 18:50:39.207 [INFO][3956] cni-plugin/k8s.go 418: Populated endpoint ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0", GenerateName:"whisker-54d5d997cf-", Namespace:"calico-system", SelfLink:"", UID:"f81f9a64-9c09-4160-be5b-578a1ab2c98f", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d5d997cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"whisker-54d5d997cf-dvx46", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8ad05e1953", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:39.257938 containerd[1630]: 2026-01-23 18:50:39.207 [INFO][3956] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.65/32] ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.258067 containerd[1630]: 2026-01-23 18:50:39.207 [INFO][3956] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif8ad05e1953 ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.258067 containerd[1630]: 2026-01-23 18:50:39.229 [INFO][3956] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.258151 containerd[1630]: 2026-01-23 18:50:39.229 [INFO][3956] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" 
Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0", GenerateName:"whisker-54d5d997cf-", Namespace:"calico-system", SelfLink:"", UID:"f81f9a64-9c09-4160-be5b-578a1ab2c98f", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"54d5d997cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91", Pod:"whisker-54d5d997cf-dvx46", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.116.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif8ad05e1953", MAC:"4e:fa:17:04:35:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:39.258327 containerd[1630]: 2026-01-23 18:50:39.251 [INFO][3956] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" Namespace="calico-system" Pod="whisker-54d5d997cf-dvx46" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-whisker--54d5d997cf--dvx46-eth0" Jan 23 18:50:39.335526 containerd[1630]: time="2026-01-23T18:50:39.335466507Z" level=info msg="connecting to shim 568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91" address="unix:///run/containerd/s/9480c4a5a8a11cb5b629b8f3c5381c7d5a6bcef825a55b87ad468be31359cb44" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:39.363346 systemd[1]: Started cri-containerd-568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91.scope - libcontainer container 568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91. 
Jan 23 18:50:39.430626 containerd[1630]: time="2026-01-23T18:50:39.430564823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54d5d997cf-dvx46,Uid:f81f9a64-9c09-4160-be5b-578a1ab2c98f,Namespace:calico-system,Attempt:0,} returns sandbox id \"568482b7d21e9c288533aaeaf924710ccbf6255e08b264e52930904d0935ac91\"" Jan 23 18:50:39.432719 containerd[1630]: time="2026-01-23T18:50:39.432697732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:50:39.893447 containerd[1630]: time="2026-01-23T18:50:39.893223262Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:39.895767 containerd[1630]: time="2026-01-23T18:50:39.895610932Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:50:39.895767 containerd[1630]: time="2026-01-23T18:50:39.895680805Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:50:39.896365 kubelet[2789]: E0123 18:50:39.896247 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:39.898540 kubelet[2789]: E0123 18:50:39.896467 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:39.906481 kubelet[2789]: E0123 18:50:39.905172 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dcb1e343f7a944f4a9269ea774e00d00,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:39.911478 containerd[1630]: time="2026-01-23T18:50:39.911406946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:50:39.991808 systemd-networkd[1515]: vxlan.calico: Link UP Jan 23 18:50:39.991820 systemd-networkd[1515]: vxlan.calico: Gained carrier Jan 23 18:50:40.382652 containerd[1630]: time="2026-01-23T18:50:40.382579060Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:40.384504 containerd[1630]: time="2026-01-23T18:50:40.384383853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:50:40.384504 containerd[1630]: time="2026-01-23T18:50:40.384512388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:40.384975 kubelet[2789]: E0123 18:50:40.384864 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:40.384975 kubelet[2789]: E0123 18:50:40.384931 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:40.385540 kubelet[2789]: E0123 18:50:40.385081 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:40.386319 kubelet[2789]: E0123 18:50:40.386234 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:50:40.962427 systemd-networkd[1515]: calif8ad05e1953: Gained IPv6LL Jan 23 18:50:41.308520 kubelet[2789]: E0123 
18:50:41.307404 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:50:41.730603 systemd-networkd[1515]: vxlan.calico: Gained IPv6LL Jan 23 18:50:42.140793 containerd[1630]: time="2026-01-23T18:50:42.140697087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s24fq,Uid:5859e809-2c43-4cb8-bea4-bfd09f8d4f24,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:42.141672 containerd[1630]: time="2026-01-23T18:50:42.141268504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b4596f9f-n7xpc,Uid:18c41309-8840-4c5c-a0e6-d6fb41f37c90,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:42.333645 systemd-networkd[1515]: cali6dda668edb6: Link UP Jan 23 18:50:42.335463 systemd-networkd[1515]: cali6dda668edb6: Gained carrier Jan 23 18:50:42.355915 containerd[1630]: 2026-01-23 18:50:42.230 [INFO][4249] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0 calico-apiserver-76b4596f9f- calico-apiserver 18c41309-8840-4c5c-a0e6-d6fb41f37c90 866 0 2026-01-23 18:50:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76b4596f9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a calico-apiserver-76b4596f9f-n7xpc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6dda668edb6 [] [] }} ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-" Jan 23 18:50:42.355915 containerd[1630]: 2026-01-23 18:50:42.230 [INFO][4249] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.355915 containerd[1630]: 2026-01-23 18:50:42.285 [INFO][4272] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" HandleID="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" 
Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.285 [INFO][4272] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" HandleID="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001006f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-de7581f71a", "pod":"calico-apiserver-76b4596f9f-n7xpc", "timestamp":"2026-01-23 18:50:42.285178163 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.285 [INFO][4272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.285 [INFO][4272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.285 [INFO][4272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a' Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.295 [INFO][4272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.302 [INFO][4272] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.307 [INFO][4272] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.309 [INFO][4272] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356406 containerd[1630]: 2026-01-23 18:50:42.311 [INFO][4272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.312 [INFO][4272] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.313 [INFO][4272] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.317 [INFO][4272] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.323 [INFO][4272] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.66/26] block=192.168.116.64/26 handle="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.324 [INFO][4272] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.116.66/26] handle="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.324 [INFO][4272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:50:42.356718 containerd[1630]: 2026-01-23 18:50:42.324 [INFO][4272] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.66/26] IPv6=[] ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" HandleID="k8s-pod-network.a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.356975 containerd[1630]: 2026-01-23 18:50:42.327 [INFO][4249] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0", GenerateName:"calico-apiserver-76b4596f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"18c41309-8840-4c5c-a0e6-d6fb41f37c90", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76b4596f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"calico-apiserver-76b4596f9f-n7xpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dda668edb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:42.357059 containerd[1630]: 2026-01-23 18:50:42.327 [INFO][4249] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.66/32] ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.357059 containerd[1630]: 2026-01-23 18:50:42.327 [INFO][4249] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dda668edb6 ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.357059 containerd[1630]: 2026-01-23 18:50:42.337 [INFO][4249] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.357955 containerd[1630]: 2026-01-23 18:50:42.338 [INFO][4249] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0", GenerateName:"calico-apiserver-76b4596f9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"18c41309-8840-4c5c-a0e6-d6fb41f37c90", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76b4596f9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e", Pod:"calico-apiserver-76b4596f9f-n7xpc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6dda668edb6", MAC:"9e:c1:e2:39:09:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:42.359077 containerd[1630]: 2026-01-23 18:50:42.351 [INFO][4249] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" Namespace="calico-apiserver" Pod="calico-apiserver-76b4596f9f-n7xpc" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--76b4596f9f--n7xpc-eth0" Jan 23 18:50:42.383212 containerd[1630]: time="2026-01-23T18:50:42.383135720Z" level=info msg="connecting to shim a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e" address="unix:///run/containerd/s/2a8d08de0d3f353cf378801e8c0a23e2898bb1d1ec0c3e4ab5f38c939292d00b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:42.421305 systemd[1]: Started cri-containerd-a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e.scope - libcontainer container a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e. 
Jan 23 18:50:42.431396 systemd-networkd[1515]: calia7072a0b2b7: Link UP Jan 23 18:50:42.431747 systemd-networkd[1515]: calia7072a0b2b7: Gained carrier Jan 23 18:50:42.458092 containerd[1630]: 2026-01-23 18:50:42.236 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0 coredns-674b8bbfcf- kube-system 5859e809-2c43-4cb8-bea4-bfd09f8d4f24 870 0 2026-01-23 18:50:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a coredns-674b8bbfcf-s24fq eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia7072a0b2b7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-" Jan 23 18:50:42.458092 containerd[1630]: 2026-01-23 18:50:42.237 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458092 containerd[1630]: 2026-01-23 18:50:42.293 [INFO][4277] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" HandleID="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.293 [INFO][4277] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" HandleID="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001031f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"coredns-674b8bbfcf-s24fq", "timestamp":"2026-01-23 18:50:42.293511643 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.293 [INFO][4277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.324 [INFO][4277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.324 [INFO][4277] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a' Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.397 [INFO][4277] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.403 [INFO][4277] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.408 [INFO][4277] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.411 [INFO][4277] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458336 containerd[1630]: 2026-01-23 18:50:42.413 [INFO][4277] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.413 [INFO][4277] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.414 [INFO][4277] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6 Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.418 [INFO][4277] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.424 [INFO][4277] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.67/26] block=192.168.116.64/26 handle="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.424 [INFO][4277] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.67/26] handle="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.424 [INFO][4277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
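[Annotation] Note the interleaving between the two concurrent CNI ADDs: request [4277] (coredns) logs "About to acquire host-wide IPAM lock" at 18:50:42.293 but only logs "Acquired" at .324 — the same instant [4272] (calico-apiserver) logs "Released host-wide IPAM lock". Concurrent allocations on one node serialize on that lock. A sketch of the pattern with a plain mutex (illustrative shape only, not Calico's code):

    package main

    import (
        "fmt"
        "sync"
    )

    // hostWideLock stands in for the per-node lock the plugin logs as
    // "About to acquire"/"Acquired"/"Released host-wide IPAM lock".
    var hostWideLock sync.Mutex

    func cniAdd(id string, assign func() string, wg *sync.WaitGroup) {
        defer wg.Done()
        fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", id)
        hostWideLock.Lock() // the second caller blocks here, exactly like [4277] above
        fmt.Printf("[%s] acquired lock, assigned %s\n", id, assign())
        hostWideLock.Unlock()
        fmt.Printf("[%s] released lock\n", id)
    }

    func main() {
        next := 66 // toy counter standing in for the block's allocation bitmap
        assign := func() string { // only ever called while holding the lock
            ip := fmt.Sprintf("192.168.116.%d/26", next)
            next++
            return ip
        }
        var wg sync.WaitGroup
        wg.Add(2)
        go cniAdd("4272", assign, &wg) // calico-apiserver pod
        go cniAdd("4277", assign, &wg) // coredns pod
        wg.Wait() // which goroutine wins the lock first depends on scheduling
    }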
Jan 23 18:50:42.458627 containerd[1630]: 2026-01-23 18:50:42.425 [INFO][4277] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.67/26] IPv6=[] ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" HandleID="k8s-pod-network.a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.428 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5859e809-2c43-4cb8-bea4-bfd09f8d4f24", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"coredns-674b8bbfcf-s24fq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7072a0b2b7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.428 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.67/32] ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.428 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia7072a0b2b7 ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.434 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.436 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5859e809-2c43-4cb8-bea4-bfd09f8d4f24", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6", Pod:"coredns-674b8bbfcf-s24fq", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia7072a0b2b7", MAC:"ee:cc:a2:41:ac:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:42.458833 containerd[1630]: 2026-01-23 18:50:42.450 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" Namespace="kube-system" Pod="coredns-674b8bbfcf-s24fq" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--s24fq-eth0" Jan 23 18:50:42.498207 containerd[1630]: time="2026-01-23T18:50:42.498108181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76b4596f9f-n7xpc,Uid:18c41309-8840-4c5c-a0e6-d6fb41f37c90,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"a949eb8a98091f31e776f79741adbd43e32e4cd867a3811fff52419068182b0e\"" Jan 23 18:50:42.501536 containerd[1630]: time="2026-01-23T18:50:42.501458702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:42.504647 containerd[1630]: time="2026-01-23T18:50:42.504627370Z" level=info msg="connecting to shim a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6" address="unix:///run/containerd/s/7fcbe9bb7d998260d66e79babf07f1edbd98e72d48c3a186834e19d83d0fcc02" namespace=k8s.io protocol=ttrpc version=3 Jan 
23 18:50:42.523295 systemd[1]: Started cri-containerd-a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6.scope - libcontainer container a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6. Jan 23 18:50:42.565806 containerd[1630]: time="2026-01-23T18:50:42.565758914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s24fq,Uid:5859e809-2c43-4cb8-bea4-bfd09f8d4f24,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6\"" Jan 23 18:50:42.569585 containerd[1630]: time="2026-01-23T18:50:42.569556270Z" level=info msg="CreateContainer within sandbox \"a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:50:42.576928 containerd[1630]: time="2026-01-23T18:50:42.576696256Z" level=info msg="Container 9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:42.581005 containerd[1630]: time="2026-01-23T18:50:42.580978517Z" level=info msg="CreateContainer within sandbox \"a5e20657045abfff5e134925451d571b61e5c822070530a446b3bc390ecb5fc6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7\"" Jan 23 18:50:42.582298 containerd[1630]: time="2026-01-23T18:50:42.582272622Z" level=info msg="StartContainer for \"9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7\"" Jan 23 18:50:42.583034 containerd[1630]: time="2026-01-23T18:50:42.582836099Z" level=info msg="connecting to shim 9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7" address="unix:///run/containerd/s/7fcbe9bb7d998260d66e79babf07f1edbd98e72d48c3a186834e19d83d0fcc02" protocol=ttrpc version=3 Jan 23 18:50:42.600329 systemd[1]: Started cri-containerd-9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7.scope - libcontainer container 9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7. 
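[Annotation] In the WorkloadEndpoint dumps above, the container ports print as Go hex literals (Port:0x35, Port:0x23c1) because the structs are rendered with %#v-style formatting. Converting them confirms these are the expected CoreDNS ports:

    package main

    import "fmt"

    func main() {
        // Ports from the v3.WorkloadEndpointPort dump, printed by Go in hex.
        fmt.Println(0x35)   // 53   -> dns (UDP) and dns-tcp (TCP)
        fmt.Println(0x23c1) // 9153 -> metrics (TCP)
    }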
Jan 23 18:50:42.624893 containerd[1630]: time="2026-01-23T18:50:42.624849824Z" level=info msg="StartContainer for \"9641d7a014aad70357751aecc40cfffd6d7d2279adef4abc01e6d6f986d5aca7\" returns successfully" Jan 23 18:50:42.945669 containerd[1630]: time="2026-01-23T18:50:42.945600957Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:42.947344 containerd[1630]: time="2026-01-23T18:50:42.947225947Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:42.947955 containerd[1630]: time="2026-01-23T18:50:42.947233317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:42.948035 kubelet[2789]: E0123 18:50:42.947613 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:42.948035 kubelet[2789]: E0123 18:50:42.947670 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:42.948035 kubelet[2789]: E0123 18:50:42.947877 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68gkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:42.949956 kubelet[2789]: E0123 18:50:42.949338 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:50:43.141217 containerd[1630]: time="2026-01-23T18:50:43.140757848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbfc5dfc-dffpg,Uid:c91754db-1ae1-406f-a4ae-966042e218eb,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:43.144158 containerd[1630]: time="2026-01-23T18:50:43.143711583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vr9zx,Uid:d16a8768-4465-40dd-b156-fd821a42a741,Namespace:kube-system,Attempt:0,}" Jan 23 18:50:43.320556 kubelet[2789]: E0123 18:50:43.319566 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:50:43.350363 systemd-networkd[1515]: calibf2a5548101: Link UP Jan 23 18:50:43.352235 systemd-networkd[1515]: calibf2a5548101: Gained carrier Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.227 [INFO][4428] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0 calico-kube-controllers-68dbfc5dfc- calico-system c91754db-1ae1-406f-a4ae-966042e218eb 864 0 2026-01-23 18:50:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:68dbfc5dfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a calico-kube-controllers-68dbfc5dfc-dffpg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibf2a5548101 [] [] }} ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.227 [INFO][4428] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.279 [INFO][4453] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" HandleID="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.279 [INFO][4453] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" HandleID="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"calico-kube-controllers-68dbfc5dfc-dffpg", "timestamp":"2026-01-23 18:50:43.278996947 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.279 [INFO][4453] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.279 [INFO][4453] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.279 [INFO][4453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a' Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.289 [INFO][4453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.294 [INFO][4453] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.299 [INFO][4453] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.302 [INFO][4453] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.306 [INFO][4453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.306 [INFO][4453] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.309 [INFO][4453] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234 Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.315 [INFO][4453] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.325 [INFO][4453] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.68/26] block=192.168.116.64/26 handle="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.325 [INFO][4453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.68/26] handle="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.325 [INFO][4453] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
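[Annotation] Every allocation in this section lands in the same /26 because the node holds an affinity for that block ("Trying affinity for 192.168.116.64/26" followed by "Affinity is confirmed and block has been loaded"): once a host owns a block, assignments contend only on the local host-wide lock rather than on the cluster datastore. A toy sketch of how a pool subdivides into /26 affinity blocks — the pool CIDR is not visible in the log, so 192.168.116.0/24 is assumed purely for illustration:

    package main

    import (
        "fmt"
        "net"
    )

    // blocksOf26 lists the /26 blocks inside a pool. Illustrative only, and
    // only handles pools up to /24 in size.
    func blocksOf26(pool string) []*net.IPNet {
        _, p, _ := net.ParseCIDR(pool)
        var out []*net.IPNet
        ip := p.IP.Mask(p.Mask).To4()
        for p.Contains(ip) {
            out = append(out, &net.IPNet{IP: ip, Mask: net.CIDRMask(26, 32)})
            nxt := make(net.IP, 4)
            copy(nxt, ip)
            nxt[3] += 64 // step to the next /26 (64 addresses)
            if nxt[3] < ip[3] {
                break // last byte wrapped: walked off the end of the pool
            }
            ip = nxt
        }
        return out
    }

    func main() {
        for _, b := range blocksOf26("192.168.116.0/24") {
            fmt.Println(b) // 192.168.116.0/26, .64/26, .128/26, .192/26
        }
    }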
Jan 23 18:50:43.371637 containerd[1630]: 2026-01-23 18:50:43.326 [INFO][4453] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.68/26] IPv6=[] ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" HandleID="k8s-pod-network.d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.331 [INFO][4428] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0", GenerateName:"calico-kube-controllers-68dbfc5dfc-", Namespace:"calico-system", SelfLink:"", UID:"c91754db-1ae1-406f-a4ae-966042e218eb", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68dbfc5dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"calico-kube-controllers-68dbfc5dfc-dffpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf2a5548101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.331 [INFO][4428] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.68/32] ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.331 [INFO][4428] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibf2a5548101 ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.350 [INFO][4428] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" 
WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.353 [INFO][4428] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0", GenerateName:"calico-kube-controllers-68dbfc5dfc-", Namespace:"calico-system", SelfLink:"", UID:"c91754db-1ae1-406f-a4ae-966042e218eb", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"68dbfc5dfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234", Pod:"calico-kube-controllers-68dbfc5dfc-dffpg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.116.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibf2a5548101", MAC:"f6:12:ad:62:72:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:43.373939 containerd[1630]: 2026-01-23 18:50:43.366 [INFO][4428] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" Namespace="calico-system" Pod="calico-kube-controllers-68dbfc5dfc-dffpg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--kube--controllers--68dbfc5dfc--dffpg-eth0" Jan 23 18:50:43.395343 systemd-networkd[1515]: cali6dda668edb6: Gained IPv6LL Jan 23 18:50:43.418447 containerd[1630]: time="2026-01-23T18:50:43.418323268Z" level=info msg="connecting to shim d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234" address="unix:///run/containerd/s/1a9d777dea0cdce447af289d7639db013605f94de9946b08f90f51ee3e251612" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:43.445620 systemd-networkd[1515]: calia9fb91762d3: Link UP Jan 23 18:50:43.447298 systemd-networkd[1515]: calia9fb91762d3: Gained carrier Jan 23 18:50:43.460367 systemd[1]: Started cri-containerd-d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234.scope - libcontainer container d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234. 
Jan 23 18:50:43.461009 kubelet[2789]: I0123 18:50:43.460961 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s24fq" podStartSLOduration=39.460945456 podStartE2EDuration="39.460945456s" podCreationTimestamp="2026-01-23 18:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:43.371362778 +0000 UTC m=+44.307306937" watchObservedRunningTime="2026-01-23 18:50:43.460945456 +0000 UTC m=+44.396889615" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.229 [INFO][4430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0 coredns-674b8bbfcf- kube-system d16a8768-4465-40dd-b156-fd821a42a741 874 0 2026-01-23 18:50:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a coredns-674b8bbfcf-vr9zx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia9fb91762d3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.229 [INFO][4430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.280 [INFO][4455] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" HandleID="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.281 [INFO][4455] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" HandleID="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"coredns-674b8bbfcf-vr9zx", "timestamp":"2026-01-23 18:50:43.280357623 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.281 [INFO][4455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.326 [INFO][4455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.326 [INFO][4455] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a' Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.392 [INFO][4455] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.407 [INFO][4455] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.412 [INFO][4455] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.414 [INFO][4455] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.416 [INFO][4455] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.417 [INFO][4455] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.418 [INFO][4455] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40 Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.424 [INFO][4455] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.435 [INFO][4455] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.69/26] block=192.168.116.64/26 handle="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.435 [INFO][4455] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.69/26] handle="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" host="ci-4459-2-3-1-de7581f71a" Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.435 [INFO][4455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:50:43.464143 containerd[1630]: 2026-01-23 18:50:43.435 [INFO][4455] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.69/26] IPv6=[] ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" HandleID="k8s-pod-network.dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Workload="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.440 [INFO][4430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d16a8768-4465-40dd-b156-fd821a42a741", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"coredns-674b8bbfcf-vr9zx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9fb91762d3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.440 [INFO][4430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.69/32] ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.440 [INFO][4430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9fb91762d3 ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.447 [INFO][4430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.449 [INFO][4430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d16a8768-4465-40dd-b156-fd821a42a741", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40", Pod:"coredns-674b8bbfcf-vr9zx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.116.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia9fb91762d3", MAC:"7a:ee:e4:9b:df:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:43.464624 containerd[1630]: 2026-01-23 18:50:43.459 [INFO][4430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" Namespace="kube-system" Pod="coredns-674b8bbfcf-vr9zx" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-coredns--674b8bbfcf--vr9zx-eth0" Jan 23 18:50:43.488043 containerd[1630]: time="2026-01-23T18:50:43.487418046Z" level=info msg="connecting to shim dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40" address="unix:///run/containerd/s/d8457d2f87c788c818895c03aaaa140144e21b7fc136890035aee9c75678a51e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:43.515401 systemd[1]: Started cri-containerd-dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40.scope - libcontainer container dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40. 
Jan 23 18:50:43.528495 containerd[1630]: time="2026-01-23T18:50:43.528470266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-68dbfc5dfc-dffpg,Uid:c91754db-1ae1-406f-a4ae-966042e218eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"d98de17dfbaa6106785f2c3c9d723b07003e6911bca146a3270dd6aa96b60234\"" Jan 23 18:50:43.530602 containerd[1630]: time="2026-01-23T18:50:43.530460880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:50:43.564917 containerd[1630]: time="2026-01-23T18:50:43.564881083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vr9zx,Uid:d16a8768-4465-40dd-b156-fd821a42a741,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40\"" Jan 23 18:50:43.569154 containerd[1630]: time="2026-01-23T18:50:43.569137182Z" level=info msg="CreateContainer within sandbox \"dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:50:43.578614 containerd[1630]: time="2026-01-23T18:50:43.577803214Z" level=info msg="Container c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:50:43.583335 containerd[1630]: time="2026-01-23T18:50:43.583309818Z" level=info msg="CreateContainer within sandbox \"dfa009503c4834bbaf1143475819666829b5df5379cc847aaac15285e5bfbc40\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657\"" Jan 23 18:50:43.584040 containerd[1630]: time="2026-01-23T18:50:43.583649792Z" level=info msg="StartContainer for \"c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657\"" Jan 23 18:50:43.584307 containerd[1630]: time="2026-01-23T18:50:43.584259350Z" level=info msg="connecting to shim c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657" address="unix:///run/containerd/s/d8457d2f87c788c818895c03aaaa140144e21b7fc136890035aee9c75678a51e" protocol=ttrpc version=3 Jan 23 18:50:43.606308 systemd[1]: Started cri-containerd-c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657.scope - libcontainer container c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657. 
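[Annotation] Each pod above follows the CRI contract: RunPodSandbox returns a sandbox ID, CreateContainer within that sandbox returns a container ID, and StartContainer launches it — each ID then appearing in a "Started cri-containerd-<id>.scope" unit. A sketch of querying the same CRI endpoint directly, using the CRI v1 API (the import path and socket path are the usual ones for containerd-backed clusters, but verify them for a given setup):

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // containerd's CRI endpoint; the kubelet talks to the same socket.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtime.NewRuntimeServiceClient(conn)

        // Each sandbox ID listed here matches a cri-containerd-<id>.scope
        // unit from the journal above.
        resp, err := rt.ListPodSandbox(context.Background(), &runtime.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, s := range resp.Items {
            fmt.Println(s.Id, s.Metadata.Name, s.State)
        }
    }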
Jan 23 18:50:43.633496 containerd[1630]: time="2026-01-23T18:50:43.633469776Z" level=info msg="StartContainer for \"c9c7adcebc9f4faeb6dfd287c0207d72a59adf715a739ab432caf940e431f657\" returns successfully" Jan 23 18:50:43.714384 systemd-networkd[1515]: calia7072a0b2b7: Gained IPv6LL Jan 23 18:50:43.972397 containerd[1630]: time="2026-01-23T18:50:43.972297231Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:43.974082 containerd[1630]: time="2026-01-23T18:50:43.974026501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:50:43.974268 containerd[1630]: time="2026-01-23T18:50:43.974151833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:43.974478 kubelet[2789]: E0123 18:50:43.974381 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:43.974478 kubelet[2789]: E0123 18:50:43.974444 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:43.975756 kubelet[2789]: E0123 18:50:43.975253 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtm9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:43.976689 kubelet[2789]: E0123 18:50:43.976614 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:50:44.141127 containerd[1630]: time="2026-01-23T18:50:44.141015804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl4mg,Uid:2a5fa6c0-649d-4612-b31e-23030250d313,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:44.141670 containerd[1630]: time="2026-01-23T18:50:44.141624390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pqgl,Uid:10031b0c-bbc2-4f8e-9df2-1b6971eda033,Namespace:calico-system,Attempt:0,}" Jan 23 18:50:44.144082 containerd[1630]: time="2026-01-23T18:50:44.144033107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-l86zs,Uid:7596438c-5371-4696-b02d-1c3d820234e2,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:44.315082 systemd-networkd[1515]: cali2085f637bc9: Link UP Jan 23 18:50:44.318351 systemd-networkd[1515]: cali2085f637bc9: Gained carrier Jan 23 18:50:44.328099 kubelet[2789]: E0123 18:50:44.328073 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:50:44.331747 kubelet[2789]: E0123 18:50:44.331656 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.240 [INFO][4625] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0 csi-node-driver- calico-system 2a5fa6c0-649d-4612-b31e-23030250d313 753 0 2026-01-23 18:50:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a csi-node-driver-hl4mg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali2085f637bc9 [] [] }} ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-" Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.240 [INFO][4625] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.273 [INFO][4658] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" HandleID="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Workload="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.275 [INFO][4658] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" HandleID="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Workload="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b7370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"csi-node-driver-hl4mg", "timestamp":"2026-01-23 18:50:44.273767887 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.275 [INFO][4658] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.275 [INFO][4658] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.275 [INFO][4658] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a'
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.288 [INFO][4658] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.291 [INFO][4658] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.295 [INFO][4658] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.296 [INFO][4658] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.298 [INFO][4658] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.298 [INFO][4658] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.299 [INFO][4658] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.303 [INFO][4658] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.307 [INFO][4658] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.70/26] block=192.168.116.64/26 handle="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.307 [INFO][4658] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.70/26] handle="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.307 [INFO][4658] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
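The IPAM entries above are the whole per-host algorithm in order: take the host-wide lock, look up the host's affine block, load it, claim one address, write the block back, release the lock. The block arithmetic is easy to verify independently; a minimal sketch using only Go's net/netip (not Calico's actual ipam.go), with the block and address copied from the entries above:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and claimed address copied from the IPAM entries above.
	block := netip.MustParsePrefix("192.168.116.64/26")
	claimed := netip.MustParseAddr("192.168.116.70")

	// A /26 leaves 32-26 = 6 host bits: 64 addresses, .64 through .127.
	fmt.Printf("block %s starts at %s and holds %d addresses\n",
		block, block.Masked().Addr(), 1<<(32-block.Bits()))

	// Mirrors the "Affinity is confirmed" step: the address handed to
	// the pod must fall inside the block affine to this host.
	fmt.Printf("%s in %s: %v\n", claimed, block, block.Contains(claimed))
}
```

Since the block spans 192.168.116.64 through .127, every pod scheduled on this node lands in it, as .70 does here and as .71, .72, and .73 do in the assignments that follow.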
Jan 23 18:50:44.338733 containerd[1630]: 2026-01-23 18:50:44.307 [INFO][4658] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.70/26] IPv6=[] ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" HandleID="k8s-pod-network.8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Workload="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.310 [INFO][4625] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a5fa6c0-649d-4612-b31e-23030250d313", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"csi-node-driver-hl4mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2085f637bc9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.310 [INFO][4625] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.70/32] ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.310 [INFO][4625] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2085f637bc9 ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.323 [INFO][4625] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.324 [INFO][4625] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2a5fa6c0-649d-4612-b31e-23030250d313", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27", Pod:"csi-node-driver-hl4mg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.116.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali2085f637bc9", MAC:"32:6e:42:8f:fa:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.339267 containerd[1630]: 2026-01-23 18:50:44.334 [INFO][4625] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" Namespace="calico-system" Pod="csi-node-driver-hl4mg" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-csi--node--driver--hl4mg-eth0" Jan 23 18:50:44.371855 containerd[1630]: time="2026-01-23T18:50:44.369955203Z" level=info msg="connecting to shim 8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27" address="unix:///run/containerd/s/81d6a53bd53123e41e18dc0adfca319499e0481e1803ab3a14e9204c44ebe329" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:44.371972 kubelet[2789]: I0123 18:50:44.371044 2789 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vr9zx" podStartSLOduration=40.371029796 podStartE2EDuration="40.371029796s" podCreationTimestamp="2026-01-23 18:50:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:50:44.37050007 +0000 UTC m=+45.306444239" watchObservedRunningTime="2026-01-23 18:50:44.371029796 +0000 UTC m=+45.306973955" Jan 23 18:50:44.415308 systemd[1]: Started cri-containerd-8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27.scope - libcontainer container 8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27. 
Jan 23 18:50:44.441693 systemd-networkd[1515]: cali29db79bee3e: Link UP Jan 23 18:50:44.442740 systemd-networkd[1515]: cali29db79bee3e: Gained carrier Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.239 [INFO][4618] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0 calico-apiserver-b7f687658- calico-apiserver 7596438c-5371-4696-b02d-1c3d820234e2 877 0 2026-01-23 18:50:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b7f687658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a calico-apiserver-b7f687658-l86zs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29db79bee3e [] [] }} ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-" Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.240 [INFO][4618] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.283 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" HandleID="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.283 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" HandleID="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f230), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-de7581f71a", "pod":"calico-apiserver-b7f687658-l86zs", "timestamp":"2026-01-23 18:50:44.283705761 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.283 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.308 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.308 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a'
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.390 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.401 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.405 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.407 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.411 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.411 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.414 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.424 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.71/26] block=192.168.116.64/26 handle="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.71/26] handle="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 18:50:44.459349 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.71/26] IPv6=[] ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" HandleID="k8s-pod-network.3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.432 [INFO][4618] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0", GenerateName:"calico-apiserver-b7f687658-", Namespace:"calico-apiserver", SelfLink:"", UID:"7596438c-5371-4696-b02d-1c3d820234e2", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7f687658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"calico-apiserver-b7f687658-l86zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29db79bee3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.432 [INFO][4618] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.71/32] ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.433 [INFO][4618] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29db79bee3e ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.442 [INFO][4618] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.442 
[INFO][4618] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0", GenerateName:"calico-apiserver-b7f687658-", Namespace:"calico-apiserver", SelfLink:"", UID:"7596438c-5371-4696-b02d-1c3d820234e2", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7f687658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14", Pod:"calico-apiserver-b7f687658-l86zs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29db79bee3e", MAC:"de:78:cf:2d:f3:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.459789 containerd[1630]: 2026-01-23 18:50:44.456 [INFO][4618] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-l86zs" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--l86zs-eth0" Jan 23 18:50:44.466366 containerd[1630]: time="2026-01-23T18:50:44.466307683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hl4mg,Uid:2a5fa6c0-649d-4612-b31e-23030250d313,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fe3277d8e1df2adf82716a7feef9e707ee24fcc4fdf69397d973f01fb13fd27\"" Jan 23 18:50:44.468679 containerd[1630]: time="2026-01-23T18:50:44.468651660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:50:44.487370 containerd[1630]: time="2026-01-23T18:50:44.487280372Z" level=info msg="connecting to shim 3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14" address="unix:///run/containerd/s/6764a1c414d6b04cecec5162048143be28fb0dd79c0521d28d57096fb57beb59" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:44.509378 systemd[1]: Started cri-containerd-3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14.scope - libcontainer container 3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14. 
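Each sandbox is wired to its runtime through a "connecting to shim" entry naming a ttrpc endpoint under /run/containerd/s/. Actually talking to the shim requires a ttrpc client, but whether the listener is up can be probed with a bare unix-socket dial; a sketch, with the socket path copied from the entry above (it exists only on the logged node, so this is illustrative):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Shim socket path as logged by containerd (unix:// scheme stripped).
	const sock = "/run/containerd/s/6764a1c414d6b04cecec5162048143be28fb0dd79c0521d28d57096fb57beb59"

	// This only proves the listener is accepting connections; speaking
	// to the shim itself would need a ttrpc client, not raw bytes.
	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("shim socket is accepting connections")
}
```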
Jan 23 18:50:44.533240 systemd-networkd[1515]: cali515c466adc2: Link UP Jan 23 18:50:44.534168 systemd-networkd[1515]: cali515c466adc2: Gained carrier Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.241 [INFO][4627] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0 goldmane-666569f655- calico-system 10031b0c-bbc2-4f8e-9df2-1b6971eda033 875 0 2026-01-23 18:50:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a goldmane-666569f655-9pqgl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali515c466adc2 [] [] }} ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-" Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.241 [INFO][4627] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.297 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" HandleID="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Workload="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.297 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" HandleID="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Workload="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000315880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-3-1-de7581f71a", "pod":"goldmane-666569f655-9pqgl", "timestamp":"2026-01-23 18:50:44.297460617 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.297 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.429 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a'
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.490 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.502 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.506 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.507 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.509 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.509 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.510 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.516 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.524 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.72/26] block=192.168.116.64/26 handle="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.524 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.72/26] handle="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.524 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 18:50:44.557588 containerd[1630]: 2026-01-23 18:50:44.524 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.72/26] IPv6=[] ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" HandleID="k8s-pod-network.9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Workload="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.527 [INFO][4627] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10031b0c-bbc2-4f8e-9df2-1b6971eda033", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"goldmane-666569f655-9pqgl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali515c466adc2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.527 [INFO][4627] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.72/32] ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.527 [INFO][4627] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali515c466adc2 ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.534 [INFO][4627] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.535 [INFO][4627] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" 
Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"10031b0c-bbc2-4f8e-9df2-1b6971eda033", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3", Pod:"goldmane-666569f655-9pqgl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.116.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali515c466adc2", MAC:"6a:bc:76:f2:72:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:44.558059 containerd[1630]: 2026-01-23 18:50:44.551 [INFO][4627] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" Namespace="calico-system" Pod="goldmane-666569f655-9pqgl" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-goldmane--666569f655--9pqgl-eth0" Jan 23 18:50:44.566205 containerd[1630]: time="2026-01-23T18:50:44.565968959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-l86zs,Uid:7596438c-5371-4696-b02d-1c3d820234e2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3c9e81d209cb20ee5d5cf93fe5b2f38382795fbdcf6103d2bd0ade673777dc14\"" Jan 23 18:50:44.585438 containerd[1630]: time="2026-01-23T18:50:44.585400321Z" level=info msg="connecting to shim 9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3" address="unix:///run/containerd/s/09b7e1f6f2e1ad4cf5c66111a3f5af27896c466a32b9fbece7bc052ad3f872d8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:44.607299 systemd[1]: Started cri-containerd-9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3.scope - libcontainer container 9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3. 
Jan 23 18:50:44.646796 containerd[1630]: time="2026-01-23T18:50:44.646764900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-9pqgl,Uid:10031b0c-bbc2-4f8e-9df2-1b6971eda033,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b852e87734b30e82a98910e5a498ed43d8e21c6ccc311ad9d5437e31c8674f3\"" Jan 23 18:50:44.866555 systemd-networkd[1515]: calia9fb91762d3: Gained IPv6LL Jan 23 18:50:44.915695 containerd[1630]: time="2026-01-23T18:50:44.915646056Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:44.917018 containerd[1630]: time="2026-01-23T18:50:44.916942461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:50:44.917018 containerd[1630]: time="2026-01-23T18:50:44.916974341Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:50:44.917509 kubelet[2789]: E0123 18:50:44.917459 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:50:44.917509 kubelet[2789]: E0123 18:50:44.917507 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:50:44.918698 containerd[1630]: time="2026-01-23T18:50:44.918660831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:44.918954 kubelet[2789]: E0123 18:50:44.918878 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:45.058385 systemd-networkd[1515]: calibf2a5548101: Gained IPv6LL Jan 23 18:50:45.342593 kubelet[2789]: E0123 18:50:45.342485 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:50:45.354750 containerd[1630]: time="2026-01-23T18:50:45.354592519Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:45.367075 containerd[1630]: time="2026-01-23T18:50:45.366867675Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:45.367075 containerd[1630]: time="2026-01-23T18:50:45.367010267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" 
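Yet another image dies with "fetch failed after status: 404 Not Found" from ghcr.io, which containerd surfaces as "failed to resolve reference". The lookup can be reproduced outside containerd against the OCI distribution API; a sketch, assuming ghcr.io's usual anonymous-token flow (the /token endpoint and repository:<name>:pull scope are GitHub Container Registry convention, not something taken from this log):

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Repository and tag copied from the failing PullImage entries.
	repo := "flatcar/calico/apiserver"
	tag := "v3.30.4"

	// ghcr.io expects a bearer token even for anonymous pulls
	// (assumed GHCR convention, see lead-in).
	resp, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// HEAD the manifest; a 404 here is exactly the resolution failure
	// containerd reports above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Add("Accept", "application/vnd.oci.image.index.v1+json")
	req.Header.Add("Accept", "application/vnd.docker.distribution.manifest.list.v2+json")

	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(res.Status) // "404 Not Found" for the tags in this log
}
```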
Jan 23 18:50:45.367405 kubelet[2789]: E0123 18:50:45.367211 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:45.367405 kubelet[2789]: E0123 18:50:45.367268 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:45.367700 kubelet[2789]: E0123 18:50:45.367592 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:45.368324 containerd[1630]: time="2026-01-23T18:50:45.368277031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:50:45.369137 kubelet[2789]: E0123 18:50:45.369052 2789 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:50:45.764890 systemd-networkd[1515]: cali29db79bee3e: Gained IPv6LL Jan 23 18:50:45.844409 containerd[1630]: time="2026-01-23T18:50:45.844315681Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:45.850963 containerd[1630]: time="2026-01-23T18:50:45.850853893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:50:45.851150 containerd[1630]: time="2026-01-23T18:50:45.850976934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:45.851327 kubelet[2789]: E0123 18:50:45.851258 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:45.851391 kubelet[2789]: E0123 18:50:45.851359 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:45.851995 containerd[1630]: time="2026-01-23T18:50:45.851811334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:50:45.852277 kubelet[2789]: E0123 18:50:45.851888 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z2s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:45.854157 kubelet[2789]: E0123 18:50:45.854101 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:50:46.018621 systemd-networkd[1515]: 
cali2085f637bc9: Gained IPv6LL Jan 23 18:50:46.141018 containerd[1630]: time="2026-01-23T18:50:46.140958107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-b9829,Uid:6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:50:46.274566 systemd-networkd[1515]: cali515c466adc2: Gained IPv6LL Jan 23 18:50:46.307249 systemd-networkd[1515]: cali0ae48d1bfb2: Link UP Jan 23 18:50:46.310588 systemd-networkd[1515]: cali0ae48d1bfb2: Gained carrier Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.200 [INFO][4861] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0 calico-apiserver-b7f687658- calico-apiserver 6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa 873 0 2026-01-23 18:50:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:b7f687658 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-3-1-de7581f71a calico-apiserver-b7f687658-b9829 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0ae48d1bfb2 [] [] }} ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-" Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.201 [INFO][4861] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.240 [INFO][4872] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" HandleID="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.241 [INFO][4872] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" HandleID="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf5a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-3-1-de7581f71a", "pod":"calico-apiserver-b7f687658-b9829", "timestamp":"2026-01-23 18:50:46.24097884 +0000 UTC"}, Hostname:"ci-4459-2-3-1-de7581f71a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.241 [INFO][4872] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.241 [INFO][4872] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.241 [INFO][4872] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-3-1-de7581f71a'
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.251 [INFO][4872] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.259 [INFO][4872] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.267 [INFO][4872] ipam/ipam.go 511: Trying affinity for 192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.270 [INFO][4872] ipam/ipam.go 158: Attempting to load block cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.273 [INFO][4872] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.116.64/26 host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.273 [INFO][4872] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.116.64/26 handle="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.278 [INFO][4872] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.285 [INFO][4872] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.116.64/26 handle="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.293 [INFO][4872] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.116.73/26] block=192.168.116.64/26 handle="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.294 [INFO][4872] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.116.73/26] handle="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" host="ci-4459-2-3-1-de7581f71a"
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.294 [INFO][4872] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
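Interleaved with these IPAM claims, systemd-networkd reports each new host-side cali* veth going Link UP, gaining carrier, and later "Gained IPv6LL", meaning the interface's fe80::/10 link-local address is now in place. On the node itself that is visible in the interface's address list; a sketch (interface name copied from the entries above, so it only resolves on the logged machine):

```go
package main

import (
	"fmt"
	"net"
	"net/netip"
)

func main() {
	// Host-side veth name taken from the systemd-networkd entries.
	ifi, err := net.InterfaceByName("cali2085f637bc9")
	if err != nil {
		panic(err) // only exists on the logged node
	}
	addrs, err := ifi.Addrs()
	if err != nil {
		panic(err)
	}
	for _, a := range addrs {
		p, err := netip.ParsePrefix(a.String())
		if err != nil {
			continue
		}
		// "Gained IPv6LL" corresponds to an address in fe80::/10.
		if p.Addr().Is6() && p.Addr().IsLinkLocalUnicast() {
			fmt.Println("IPv6LL:", p)
		}
	}
}
```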
Jan 23 18:50:46.334906 containerd[1630]: 2026-01-23 18:50:46.294 [INFO][4872] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.116.73/26] IPv6=[] ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" HandleID="k8s-pod-network.86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Workload="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.299 [INFO][4861] cni-plugin/k8s.go 418: Populated endpoint ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0", GenerateName:"calico-apiserver-b7f687658-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7f687658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"", Pod:"calico-apiserver-b7f687658-b9829", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae48d1bfb2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.300 [INFO][4861] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.116.73/32] ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.300 [INFO][4861] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ae48d1bfb2 ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.310 [INFO][4861] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.311 
[INFO][4861] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0", GenerateName:"calico-apiserver-b7f687658-", Namespace:"calico-apiserver", SelfLink:"", UID:"6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 50, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"b7f687658", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-3-1-de7581f71a", ContainerID:"86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5", Pod:"calico-apiserver-b7f687658-b9829", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.116.73/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0ae48d1bfb2", MAC:"f6:57:2a:31:96:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:50:46.335886 containerd[1630]: 2026-01-23 18:50:46.326 [INFO][4861] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" Namespace="calico-apiserver" Pod="calico-apiserver-b7f687658-b9829" WorkloadEndpoint="ci--4459--2--3--1--de7581f71a-k8s-calico--apiserver--b7f687658--b9829-eth0" Jan 23 18:50:46.346771 kubelet[2789]: E0123 18:50:46.346542 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:50:46.349331 kubelet[2789]: E0123 18:50:46.348584 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" 
podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:50:46.358284 containerd[1630]: time="2026-01-23T18:50:46.357682743Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:46.364254 containerd[1630]: time="2026-01-23T18:50:46.359020478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:50:46.364254 containerd[1630]: time="2026-01-23T18:50:46.359091590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:50:46.364385 kubelet[2789]: E0123 18:50:46.359950 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:50:46.364385 kubelet[2789]: E0123 18:50:46.360024 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:50:46.364385 kubelet[2789]: E0123 18:50:46.360133 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:46.364385 kubelet[2789]: E0123 18:50:46.361361 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:46.378823 containerd[1630]: time="2026-01-23T18:50:46.378752322Z" level=info msg="connecting to shim 86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5" address="unix:///run/containerd/s/aeb1f4327fba11667ff1231355e65857849bc2e77fae93c9ab5139b8ce76e9f1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:50:46.430904 systemd[1]: Started cri-containerd-86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5.scope - libcontainer container 86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5. 
Jan 23 18:50:46.500271 containerd[1630]: time="2026-01-23T18:50:46.499899774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-b7f687658-b9829,Uid:6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"86c5cfef07d77acdb2b4e35f0f098969c0248d9489030521bb56dd25705155f5\"" Jan 23 18:50:46.505675 containerd[1630]: time="2026-01-23T18:50:46.505630596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:47.015910 containerd[1630]: time="2026-01-23T18:50:47.015773337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:47.017736 containerd[1630]: time="2026-01-23T18:50:47.017664396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:47.017829 containerd[1630]: time="2026-01-23T18:50:47.017763627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:47.018017 kubelet[2789]: E0123 18:50:47.017967 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:47.018132 kubelet[2789]: E0123 18:50:47.018028 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:47.018313 kubelet[2789]: E0123 18:50:47.018180 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfvpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:47.019982 kubelet[2789]: E0123 18:50:47.019357 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:50:47.349115 kubelet[2789]: E0123 18:50:47.348482 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:50:47.351415 kubelet[2789]: E0123 18:50:47.351353 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:50:47.682417 systemd-networkd[1515]: cali0ae48d1bfb2: 
Gained IPv6LL Jan 23 18:50:48.350428 kubelet[2789]: E0123 18:50:48.350341 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:50:53.144908 containerd[1630]: time="2026-01-23T18:50:53.144274184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:50:53.604045 containerd[1630]: time="2026-01-23T18:50:53.603959151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:53.608063 containerd[1630]: time="2026-01-23T18:50:53.607973737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:50:53.608212 containerd[1630]: time="2026-01-23T18:50:53.608072018Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:50:53.608328 kubelet[2789]: E0123 18:50:53.608259 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:53.608328 kubelet[2789]: E0123 18:50:53.608322 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:50:53.609013 kubelet[2789]: E0123 18:50:53.608462 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dcb1e343f7a944f4a9269ea774e00d00,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:53.611680 containerd[1630]: time="2026-01-23T18:50:53.611623611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:50:54.111819 containerd[1630]: time="2026-01-23T18:50:54.111749130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:54.113795 containerd[1630]: time="2026-01-23T18:50:54.113698558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:50:54.114351 containerd[1630]: time="2026-01-23T18:50:54.113756418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:54.114705 kubelet[2789]: E0123 18:50:54.114655 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:54.114879 kubelet[2789]: E0123 18:50:54.114854 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:50:54.115329 kubelet[2789]: E0123 18:50:54.115142 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:54.117344 kubelet[2789]: E0123 18:50:54.117231 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:50:57.141675 containerd[1630]: time="2026-01-23T18:50:57.141376983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:50:57.586096 containerd[1630]: time="2026-01-23T18:50:57.586033910Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 23 18:50:57.587634 containerd[1630]: time="2026-01-23T18:50:57.587575604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:50:57.587634 containerd[1630]: time="2026-01-23T18:50:57.587673404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:50:57.587889 kubelet[2789]: E0123 18:50:57.587812 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:57.587889 kubelet[2789]: E0123 18:50:57.587868 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:50:57.588664 kubelet[2789]: E0123 18:50:57.588018 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtm9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:57.589642 kubelet[2789]: E0123 18:50:57.589477 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:50:58.142116 containerd[1630]: time="2026-01-23T18:50:58.142050079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:50:58.585314 containerd[1630]: time="2026-01-23T18:50:58.585232751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:58.587033 containerd[1630]: time="2026-01-23T18:50:58.586903814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:50:58.587213 containerd[1630]: time="2026-01-23T18:50:58.586968454Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:58.587861 kubelet[2789]: E0123 18:50:58.587513 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:58.587861 kubelet[2789]: E0123 18:50:58.587575 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:50:58.587861 kubelet[2789]: E0123 
18:50:58.587737 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z2s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:58.589116 kubelet[2789]: E0123 18:50:58.589039 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" 
podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:50:59.143221 containerd[1630]: time="2026-01-23T18:50:59.142672094Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:50:59.617170 containerd[1630]: time="2026-01-23T18:50:59.617065677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:50:59.618545 containerd[1630]: time="2026-01-23T18:50:59.618493798Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:50:59.618647 containerd[1630]: time="2026-01-23T18:50:59.618589839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:50:59.618828 kubelet[2789]: E0123 18:50:59.618773 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:59.619499 kubelet[2789]: E0123 18:50:59.618833 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:50:59.619499 kubelet[2789]: E0123 18:50:59.619004 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68gkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:50:59.620233 kubelet[2789]: E0123 18:50:59.620169 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:51:00.141016 containerd[1630]: time="2026-01-23T18:51:00.140964395Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:00.577078 containerd[1630]: time="2026-01-23T18:51:00.576908326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:00.578528 containerd[1630]: time="2026-01-23T18:51:00.578450868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:00.578606 containerd[1630]: time="2026-01-23T18:51:00.578549568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:00.578826 kubelet[2789]: E0123 18:51:00.578774 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:00.579095 kubelet[2789]: E0123 18:51:00.578836 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:00.579331 containerd[1630]: 
time="2026-01-23T18:51:00.579151913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:00.580572 kubelet[2789]: E0123 18:51:00.580417 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:00.582157 kubelet[2789]: E0123 18:51:00.582120 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:51:01.395793 containerd[1630]: time="2026-01-23T18:51:01.395729151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:01.397640 containerd[1630]: time="2026-01-23T18:51:01.397532214Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:01.397732 containerd[1630]: time="2026-01-23T18:51:01.397654456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:01.398005 kubelet[2789]: E0123 18:51:01.397938 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:01.398005 kubelet[2789]: E0123 18:51:01.397995 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:01.398729 containerd[1630]: time="2026-01-23T18:51:01.398440681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:51:01.399566 kubelet[2789]: E0123 18:51:01.399454 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfvpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:01.401461 kubelet[2789]: E0123 18:51:01.401412 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:51:01.828531 containerd[1630]: time="2026-01-23T18:51:01.828357761Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:01.830365 containerd[1630]: time="2026-01-23T18:51:01.830250116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:51:01.830563 containerd[1630]: time="2026-01-23T18:51:01.830373697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:51:01.830674 kubelet[2789]: E0123 18:51:01.830631 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:01.830775 kubelet[2789]: E0123 18:51:01.830692 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:01.830999 kubelet[2789]: E0123 18:51:01.830840 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:01.834491 containerd[1630]: time="2026-01-23T18:51:01.834389727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:51:02.269390 containerd[1630]: time="2026-01-23T18:51:02.269298379Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:02.270972 containerd[1630]: time="2026-01-23T18:51:02.270824670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:51:02.270972 containerd[1630]: time="2026-01-23T18:51:02.270927340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:51:02.271373 kubelet[2789]: E0123 18:51:02.271143 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:02.271373 kubelet[2789]: E0123 18:51:02.271240 2789 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:02.271550 kubelet[2789]: E0123 18:51:02.271406 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:02.272980 kubelet[2789]: E0123 18:51:02.272856 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:51:07.144979 kubelet[2789]: E0123 18:51:07.144771 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:51:11.143458 kubelet[2789]: E0123 18:51:11.142094 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:51:12.141608 kubelet[2789]: E0123 18:51:12.141424 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:51:12.143131 kubelet[2789]: E0123 18:51:12.142288 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:51:13.144222 kubelet[2789]: E0123 18:51:13.143613 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:51:13.144222 kubelet[2789]: E0123 18:51:13.144107 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:51:14.145512 kubelet[2789]: E0123 18:51:14.145433 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:51:18.141451 containerd[1630]: time="2026-01-23T18:51:18.141396414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:51:18.600048 containerd[1630]: time="2026-01-23T18:51:18.599975919Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:18.601833 containerd[1630]: time="2026-01-23T18:51:18.601730328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:51:18.601943 containerd[1630]: time="2026-01-23T18:51:18.601866399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:51:18.602350 kubelet[2789]: E0123 18:51:18.602126 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:18.602350 kubelet[2789]: E0123 18:51:18.602274 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:18.602976 kubelet[2789]: E0123 18:51:18.602539 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dcb1e343f7a944f4a9269ea774e00d00,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:18.606281 containerd[1630]: time="2026-01-23T18:51:18.606235421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:51:19.277035 containerd[1630]: time="2026-01-23T18:51:19.276833782Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:19.278633 containerd[1630]: time="2026-01-23T18:51:19.278563450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:51:19.278855 containerd[1630]: time="2026-01-23T18:51:19.278728202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:19.279421 kubelet[2789]: E0123 18:51:19.279333 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:19.279630 kubelet[2789]: E0123 18:51:19.279568 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:51:19.279760 kubelet[2789]: E0123 18:51:19.279718 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:19.281253 kubelet[2789]: E0123 18:51:19.281098 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:51:22.141306 containerd[1630]: time="2026-01-23T18:51:22.141267938Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:51:22.587755 containerd[1630]: time="2026-01-23T18:51:22.587701010Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:22.588897 containerd[1630]: time="2026-01-23T18:51:22.588871226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:51:22.588955 containerd[1630]: time="2026-01-23T18:51:22.588929816Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:51:22.589082 kubelet[2789]: E0123 18:51:22.589045 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:22.589438 kubelet[2789]: E0123 18:51:22.589084 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:51:22.589438 kubelet[2789]: E0123 18:51:22.589203 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtm9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:22.590583 kubelet[2789]: E0123 18:51:22.590543 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:51:23.145017 containerd[1630]: time="2026-01-23T18:51:23.144174017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:23.606864 containerd[1630]: time="2026-01-23T18:51:23.606790467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:23.609231 containerd[1630]: time="2026-01-23T18:51:23.608289794Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:23.609231 containerd[1630]: time="2026-01-23T18:51:23.608411756Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:23.609402 kubelet[2789]: E0123 18:51:23.608855 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:23.609402 kubelet[2789]: E0123 18:51:23.608950 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:23.610029 kubelet[2789]: 
E0123 18:51:23.609268 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfvpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:23.612165 kubelet[2789]: E0123 18:51:23.612102 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:51:25.144530 containerd[1630]: time="2026-01-23T18:51:25.144402188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:51:25.621003 containerd[1630]: time="2026-01-23T18:51:25.620946606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:25.622408 containerd[1630]: time="2026-01-23T18:51:25.622381853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:51:25.622456 containerd[1630]: time="2026-01-23T18:51:25.622443943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:25.622623 kubelet[2789]: E0123 18:51:25.622578 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:25.623322 kubelet[2789]: E0123 18:51:25.622632 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:51:25.623322 kubelet[2789]: E0123 18:51:25.622731 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z2s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:25.624165 kubelet[2789]: E0123 18:51:25.624097 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:51:26.142535 containerd[1630]: time="2026-01-23T18:51:26.142465402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:26.581648 containerd[1630]: time="2026-01-23T18:51:26.581354621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:26.583462 containerd[1630]: time="2026-01-23T18:51:26.583091909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:26.583462 containerd[1630]: time="2026-01-23T18:51:26.583169700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:26.583886 kubelet[2789]: E0123 18:51:26.583813 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:26.584007 kubelet[2789]: E0123 18:51:26.583898 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:26.584932 kubelet[2789]: E0123 18:51:26.584078 2789 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68gkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:26.585335 kubelet[2789]: E0123 18:51:26.585261 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:51:27.143749 containerd[1630]: time="2026-01-23T18:51:27.143527249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:51:27.601255 containerd[1630]: time="2026-01-23T18:51:27.601117029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:27.602627 containerd[1630]: time="2026-01-23T18:51:27.602550055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:51:27.602627 containerd[1630]: time="2026-01-23T18:51:27.602610885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:51:27.602926 kubelet[2789]: E0123 18:51:27.602788 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:27.602926 kubelet[2789]: E0123 18:51:27.602822 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:51:27.603312 kubelet[2789]: E0123 18:51:27.603038 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:27.603746 containerd[1630]: time="2026-01-23T18:51:27.603525770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:51:27.604876 kubelet[2789]: E0123 18:51:27.604845 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:51:28.049516 containerd[1630]: time="2026-01-23T18:51:28.048881851Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:28.050519 containerd[1630]: time="2026-01-23T18:51:28.050419627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:51:28.050613 containerd[1630]: time="2026-01-23T18:51:28.050544258Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:51:28.050881 kubelet[2789]: E0123 18:51:28.050817 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:28.051361 kubelet[2789]: E0123 18:51:28.050883 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:51:28.051361 kubelet[2789]: E0123 18:51:28.051065 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:28.054461 containerd[1630]: time="2026-01-23T18:51:28.054394674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:51:28.558045 containerd[1630]: time="2026-01-23T18:51:28.557794491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:28.559580 containerd[1630]: time="2026-01-23T18:51:28.559439568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:51:28.559580 containerd[1630]: time="2026-01-23T18:51:28.559545408Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:51:28.560018 kubelet[2789]: E0123 18:51:28.559958 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:28.560227 kubelet[2789]: E0123 18:51:28.560152 2789 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:51:28.561072 kubelet[2789]: E0123 18:51:28.561000 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:28.562634 kubelet[2789]: E0123 18:51:28.562278 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:51:34.144635 kubelet[2789]: E0123 18:51:34.144467 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:51:35.148215 kubelet[2789]: E0123 18:51:35.145575 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:51:36.141073 kubelet[2789]: E0123 18:51:36.141031 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:51:36.213045 systemd[1]: Started sshd@7-89.167.0.15:22-20.161.92.111:50672.service - OpenSSH per-connection server daemon (20.161.92.111:50672). Jan 23 18:51:36.980563 sshd[5007]: Accepted publickey for core from 20.161.92.111 port 50672 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:36.982323 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:36.988865 systemd-logind[1606]: New session 8 of user core. Jan 23 18:51:36.993293 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 18:51:37.141715 kubelet[2789]: E0123 18:51:37.141305 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:51:37.661852 sshd[5010]: Connection closed by 20.161.92.111 port 50672 Jan 23 18:51:37.662537 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:37.671600 systemd[1]: sshd@7-89.167.0.15:22-20.161.92.111:50672.service: Deactivated successfully. Jan 23 18:51:37.675877 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:51:37.678493 systemd-logind[1606]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:51:37.681844 systemd-logind[1606]: Removed session 8. Jan 23 18:51:40.140660 kubelet[2789]: E0123 18:51:40.140587 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:51:42.143120 kubelet[2789]: E0123 18:51:42.143053 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:51:42.799568 systemd[1]: Started sshd@8-89.167.0.15:22-20.161.92.111:52722.service - OpenSSH per-connection server daemon (20.161.92.111:52722). Jan 23 18:51:43.582555 sshd[5048]: Accepted publickey for core from 20.161.92.111 port 52722 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:43.587170 sshd-session[5048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:43.600303 systemd-logind[1606]: New session 9 of user core. Jan 23 18:51:43.603383 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 18:51:44.142577 kubelet[2789]: E0123 18:51:44.142502 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:51:44.231842 sshd[5051]: Connection closed by 20.161.92.111 port 52722 Jan 23 18:51:44.234687 sshd-session[5048]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:44.241578 systemd-logind[1606]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:51:44.242875 systemd[1]: sshd@8-89.167.0.15:22-20.161.92.111:52722.service: Deactivated successfully. Jan 23 18:51:44.247602 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:51:44.251535 systemd-logind[1606]: Removed session 9. Jan 23 18:51:46.140691 kubelet[2789]: E0123 18:51:46.140449 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:51:47.145129 kubelet[2789]: E0123 18:51:47.143526 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:51:49.368296 systemd[1]: Started sshd@9-89.167.0.15:22-20.161.92.111:52736.service - OpenSSH per-connection server daemon (20.161.92.111:52736). 
Jan 23 18:51:50.127842 sshd[5064]: Accepted publickey for core from 20.161.92.111 port 52736 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:50.132595 sshd-session[5064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:50.149397 systemd-logind[1606]: New session 10 of user core. Jan 23 18:51:50.155752 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:51:50.785304 sshd[5067]: Connection closed by 20.161.92.111 port 52736 Jan 23 18:51:50.787204 sshd-session[5064]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:50.793007 systemd[1]: sshd@9-89.167.0.15:22-20.161.92.111:52736.service: Deactivated successfully. Jan 23 18:51:50.796259 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:51:50.798490 systemd-logind[1606]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:51:50.799873 systemd-logind[1606]: Removed session 10. Jan 23 18:51:50.917359 systemd[1]: Started sshd@10-89.167.0.15:22-20.161.92.111:52738.service - OpenSSH per-connection server daemon (20.161.92.111:52738). Jan 23 18:51:51.142145 kubelet[2789]: E0123 18:51:51.141915 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:51:51.145080 kubelet[2789]: E0123 18:51:51.145021 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:51:51.680139 sshd[5084]: Accepted publickey for core from 20.161.92.111 port 52738 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:51.682824 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:51.691653 systemd-logind[1606]: New session 11 of user core. Jan 23 18:51:51.703378 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:51:52.389621 sshd[5087]: Connection closed by 20.161.92.111 port 52738 Jan 23 18:51:52.390607 sshd-session[5084]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:52.394743 systemd-logind[1606]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:51:52.395737 systemd[1]: sshd@10-89.167.0.15:22-20.161.92.111:52738.service: Deactivated successfully. Jan 23 18:51:52.398987 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:51:52.404883 systemd-logind[1606]: Removed session 11. Jan 23 18:51:52.527472 systemd[1]: Started sshd@11-89.167.0.15:22-20.161.92.111:50032.service - OpenSSH per-connection server daemon (20.161.92.111:50032). 
Jan 23 18:51:53.145508 kubelet[2789]: E0123 18:51:53.145433 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:51:53.147361 kubelet[2789]: E0123 18:51:53.147317 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:51:53.305157 sshd[5097]: Accepted publickey for core from 20.161.92.111 port 50032 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:53.307845 sshd-session[5097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:53.317720 systemd-logind[1606]: New session 12 of user core. Jan 23 18:51:53.322478 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:51:53.942979 sshd[5100]: Connection closed by 20.161.92.111 port 50032 Jan 23 18:51:53.944939 sshd-session[5097]: pam_unix(sshd:session): session closed for user core Jan 23 18:51:53.951695 systemd-logind[1606]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:51:53.953374 systemd[1]: sshd@11-89.167.0.15:22-20.161.92.111:50032.service: Deactivated successfully. Jan 23 18:51:53.959547 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 18:51:53.966291 systemd-logind[1606]: Removed session 12. Jan 23 18:51:55.144049 kubelet[2789]: E0123 18:51:55.143659 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:51:59.080646 systemd[1]: Started sshd@12-89.167.0.15:22-20.161.92.111:50034.service - OpenSSH per-connection server daemon (20.161.92.111:50034). 
Jan 23 18:51:59.141548 containerd[1630]: time="2026-01-23T18:51:59.141520660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:51:59.602501 containerd[1630]: time="2026-01-23T18:51:59.602409484Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:51:59.604105 containerd[1630]: time="2026-01-23T18:51:59.604050298Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:51:59.604292 containerd[1630]: time="2026-01-23T18:51:59.604165229Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:51:59.604654 kubelet[2789]: E0123 18:51:59.604588 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:59.604654 kubelet[2789]: E0123 18:51:59.604658 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:51:59.606000 kubelet[2789]: E0123 18:51:59.604829 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:dcb1e343f7a944f4a9269ea774e00d00,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:51:59.609211 containerd[1630]: time="2026-01-23T18:51:59.609066663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:51:59.852635 sshd[5113]: Accepted publickey for core from 20.161.92.111 port 50034 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:51:59.854072 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:51:59.857739 systemd-logind[1606]: New session 13 of user core. Jan 23 18:51:59.862302 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:52:00.053370 containerd[1630]: time="2026-01-23T18:52:00.053278507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:00.055038 containerd[1630]: time="2026-01-23T18:52:00.054876552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:52:00.055038 containerd[1630]: time="2026-01-23T18:52:00.054997022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:52:00.055433 kubelet[2789]: E0123 18:52:00.055365 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:52:00.055582 kubelet[2789]: E0123 18:52:00.055462 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:52:00.056166 kubelet[2789]: E0123 18:52:00.055739 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zm6zb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-54d5d997cf-dvx46_calico-system(f81f9a64-9c09-4160-be5b-578a1ab2c98f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:00.058077 kubelet[2789]: E0123 18:52:00.057940 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:52:00.494015 sshd[5118]: Connection closed by 20.161.92.111 port 50034 Jan 23 18:52:00.494362 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:00.498432 systemd[1]: sshd@12-89.167.0.15:22-20.161.92.111:50034.service: Deactivated successfully. Jan 23 18:52:00.500812 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:52:00.502038 systemd-logind[1606]: Session 13 logged out. Waiting for processes to exit. 
Jan 23 18:52:00.504350 systemd-logind[1606]: Removed session 13. Jan 23 18:52:00.627355 systemd[1]: Started sshd@13-89.167.0.15:22-20.161.92.111:50042.service - OpenSSH per-connection server daemon (20.161.92.111:50042). Jan 23 18:52:01.142824 kubelet[2789]: E0123 18:52:01.142352 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:52:01.403067 sshd[5136]: Accepted publickey for core from 20.161.92.111 port 50042 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:01.405643 sshd-session[5136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:01.414956 systemd-logind[1606]: New session 14 of user core. Jan 23 18:52:01.421451 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 18:52:02.141779 kubelet[2789]: E0123 18:52:02.141665 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:52:02.179656 sshd[5139]: Connection closed by 20.161.92.111 port 50042 Jan 23 18:52:02.183335 sshd-session[5136]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:02.186631 systemd[1]: sshd@13-89.167.0.15:22-20.161.92.111:50042.service: Deactivated successfully. Jan 23 18:52:02.188920 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:52:02.191301 systemd-logind[1606]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:52:02.193933 systemd-logind[1606]: Removed session 14. Jan 23 18:52:02.317375 systemd[1]: Started sshd@14-89.167.0.15:22-20.161.92.111:50056.service - OpenSSH per-connection server daemon (20.161.92.111:50056). Jan 23 18:52:03.072559 sshd[5149]: Accepted publickey for core from 20.161.92.111 port 50056 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:03.075031 sshd-session[5149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:03.085839 systemd-logind[1606]: New session 15 of user core. Jan 23 18:52:03.098431 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:52:04.141455 containerd[1630]: time="2026-01-23T18:52:04.141411668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:52:04.257528 sshd[5152]: Connection closed by 20.161.92.111 port 50056 Jan 23 18:52:04.258363 sshd-session[5149]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:04.262346 systemd-logind[1606]: Session 15 logged out. Waiting for processes to exit. 
Jan 23 18:52:04.262536 systemd[1]: sshd@14-89.167.0.15:22-20.161.92.111:50056.service: Deactivated successfully. Jan 23 18:52:04.264005 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:52:04.266050 systemd-logind[1606]: Removed session 15. Jan 23 18:52:04.400540 systemd[1]: Started sshd@15-89.167.0.15:22-20.161.92.111:45568.service - OpenSSH per-connection server daemon (20.161.92.111:45568). Jan 23 18:52:04.790158 containerd[1630]: time="2026-01-23T18:52:04.789618950Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:04.791271 containerd[1630]: time="2026-01-23T18:52:04.791157954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:52:04.791489 containerd[1630]: time="2026-01-23T18:52:04.791316156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:04.791728 kubelet[2789]: E0123 18:52:04.791647 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:04.791728 kubelet[2789]: E0123 18:52:04.791719 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:04.793157 kubelet[2789]: E0123 18:52:04.791879 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zfvpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-b9829_calico-apiserver(6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:04.793590 kubelet[2789]: E0123 18:52:04.793524 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:52:05.144374 kubelet[2789]: E0123 18:52:05.144142 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" Jan 23 18:52:05.214388 sshd[5171]: Accepted publickey for core from 20.161.92.111 port 45568 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:05.217761 sshd-session[5171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:05.226909 systemd-logind[1606]: New session 16 of user core. Jan 23 18:52:05.234420 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:52:05.902326 sshd[5174]: Connection closed by 20.161.92.111 port 45568 Jan 23 18:52:05.904084 sshd-session[5171]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:05.911690 systemd[1]: sshd@15-89.167.0.15:22-20.161.92.111:45568.service: Deactivated successfully. Jan 23 18:52:05.916865 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:52:05.919616 systemd-logind[1606]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:52:05.922682 systemd-logind[1606]: Removed session 16. Jan 23 18:52:06.041570 systemd[1]: Started sshd@16-89.167.0.15:22-20.161.92.111:45574.service - OpenSSH per-connection server daemon (20.161.92.111:45574). 
Jan 23 18:52:06.821268 sshd[5187]: Accepted publickey for core from 20.161.92.111 port 45574 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:06.822163 sshd-session[5187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:06.830270 systemd-logind[1606]: New session 17 of user core. Jan 23 18:52:06.844365 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 18:52:07.142367 kubelet[2789]: E0123 18:52:07.141308 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:52:07.408557 sshd[5192]: Connection closed by 20.161.92.111 port 45574 Jan 23 18:52:07.411348 sshd-session[5187]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:07.418577 systemd[1]: sshd@16-89.167.0.15:22-20.161.92.111:45574.service: Deactivated successfully. Jan 23 18:52:07.424625 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 18:52:07.426920 systemd-logind[1606]: Session 17 logged out. Waiting for processes to exit. Jan 23 18:52:07.430430 systemd-logind[1606]: Removed session 17. Jan 23 18:52:09.144240 containerd[1630]: time="2026-01-23T18:52:09.142971711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:52:09.586680 containerd[1630]: time="2026-01-23T18:52:09.586606949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:09.588325 containerd[1630]: time="2026-01-23T18:52:09.588218104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:52:09.588454 containerd[1630]: time="2026-01-23T18:52:09.588255314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:52:09.588664 kubelet[2789]: E0123 18:52:09.588575 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:52:09.589236 kubelet[2789]: E0123 18:52:09.588661 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:52:09.589236 kubelet[2789]: E0123 18:52:09.588858 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:09.591918 containerd[1630]: time="2026-01-23T18:52:09.591839314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:52:10.027395 containerd[1630]: time="2026-01-23T18:52:10.026728638Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:10.028382 containerd[1630]: time="2026-01-23T18:52:10.028300182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:52:10.028382 containerd[1630]: time="2026-01-23T18:52:10.028365192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:52:10.028698 kubelet[2789]: E0123 18:52:10.028621 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:52:10.028698 kubelet[2789]: E0123 18:52:10.028660 2789 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:52:10.029022 kubelet[2789]: E0123 18:52:10.028991 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gvzz7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hl4mg_calico-system(2a5fa6c0-649d-4612-b31e-23030250d313): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:10.030253 kubelet[2789]: E0123 18:52:10.030207 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313" Jan 23 18:52:12.549865 systemd[1]: Started sshd@17-89.167.0.15:22-20.161.92.111:38184.service - OpenSSH per-connection server daemon (20.161.92.111:38184). Jan 23 18:52:13.144397 containerd[1630]: time="2026-01-23T18:52:13.144311380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:52:13.356198 sshd[5230]: Accepted publickey for core from 20.161.92.111 port 38184 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:13.358756 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:13.368418 systemd-logind[1606]: New session 18 of user core. Jan 23 18:52:13.377625 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 18:52:13.581989 containerd[1630]: time="2026-01-23T18:52:13.581879341Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:13.583343 containerd[1630]: time="2026-01-23T18:52:13.583260174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:52:13.583598 kubelet[2789]: E0123 18:52:13.583545 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:52:13.583869 containerd[1630]: time="2026-01-23T18:52:13.583319614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:52:13.584622 kubelet[2789]: E0123 18:52:13.584086 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:52:13.584622 kubelet[2789]: E0123 18:52:13.584196 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xtm9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-68dbfc5dfc-dffpg_calico-system(c91754db-1ae1-406f-a4ae-966042e218eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:13.585903 kubelet[2789]: E0123 18:52:13.585859 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb" Jan 23 18:52:13.931766 sshd[5255]: Connection closed by 20.161.92.111 port 38184 Jan 23 18:52:13.932351 
sshd-session[5230]: pam_unix(sshd:session): session closed for user core Jan 23 18:52:13.937402 systemd[1]: sshd@17-89.167.0.15:22-20.161.92.111:38184.service: Deactivated successfully. Jan 23 18:52:13.939649 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 18:52:13.941157 systemd-logind[1606]: Session 18 logged out. Waiting for processes to exit. Jan 23 18:52:13.942914 systemd-logind[1606]: Removed session 18. Jan 23 18:52:15.141745 kubelet[2789]: E0123 18:52:15.141698 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f" Jan 23 18:52:17.144904 containerd[1630]: time="2026-01-23T18:52:17.144804104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:52:17.571075 containerd[1630]: time="2026-01-23T18:52:17.570719436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:17.572510 containerd[1630]: time="2026-01-23T18:52:17.572412950Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:52:17.572652 containerd[1630]: time="2026-01-23T18:52:17.572511390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:17.572749 kubelet[2789]: E0123 18:52:17.572698 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:52:17.573670 kubelet[2789]: E0123 18:52:17.572755 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:52:17.573670 kubelet[2789]: E0123 18:52:17.572914 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8z2s4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-9pqgl_calico-system(10031b0c-bbc2-4f8e-9df2-1b6971eda033): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:17.574953 kubelet[2789]: E0123 18:52:17.574870 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033" Jan 23 18:52:18.142546 kubelet[2789]: E0123 
18:52:18.142444 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa" Jan 23 18:52:19.074364 systemd[1]: Started sshd@18-89.167.0.15:22-20.161.92.111:38186.service - OpenSSH per-connection server daemon (20.161.92.111:38186). Jan 23 18:52:19.143981 containerd[1630]: time="2026-01-23T18:52:19.143914367Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:52:19.578938 containerd[1630]: time="2026-01-23T18:52:19.578853769Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:52:19.580325 containerd[1630]: time="2026-01-23T18:52:19.580286262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:52:19.580468 containerd[1630]: time="2026-01-23T18:52:19.580374162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:52:19.580682 kubelet[2789]: E0123 18:52:19.580605 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:19.581818 kubelet[2789]: E0123 18:52:19.580691 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:52:19.581818 kubelet[2789]: E0123 18:52:19.580998 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gx6dt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-b7f687658-l86zs_calico-apiserver(7596438c-5371-4696-b02d-1c3d820234e2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:52:19.582112 containerd[1630]: time="2026-01-23T18:52:19.581233755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:52:19.583102 kubelet[2789]: E0123 18:52:19.583059 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2" Jan 23 18:52:19.857209 sshd[5268]: Accepted publickey for core from 20.161.92.111 port 38186 ssh2: RSA SHA256:O+GrD1+S/PiyVvonHu9VtMwOp9GUWWLq8toHa2xZwQY Jan 23 18:52:19.857710 sshd-session[5268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:52:19.861463 systemd-logind[1606]: New session 19 of user core. Jan 23 18:52:19.868284 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 18:52:20.014263 containerd[1630]: time="2026-01-23T18:52:20.014229670Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:52:20.015794 containerd[1630]: time="2026-01-23T18:52:20.015756265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:52:20.015944 containerd[1630]: time="2026-01-23T18:52:20.015874565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:52:20.016300 kubelet[2789]: E0123 18:52:20.016146 2789 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:52:20.016300 kubelet[2789]: E0123 18:52:20.016260 2789 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:52:20.016672 kubelet[2789]: E0123 18:52:20.016587 2789 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-68gkg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76b4596f9f-n7xpc_calico-apiserver(18c41309-8840-4c5c-a0e6-d6fb41f37c90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:52:20.017860 kubelet[2789]: E0123 18:52:20.017806 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90"
Jan 23 18:52:20.463061 sshd[5271]: Connection closed by 20.161.92.111 port 38186
Jan 23 18:52:20.463583 sshd-session[5268]: pam_unix(sshd:session): session closed for user core
Jan 23 18:52:20.466462 systemd[1]: sshd@18-89.167.0.15:22-20.161.92.111:38186.service: Deactivated successfully.
Jan 23 18:52:20.468589 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 18:52:20.469940 systemd-logind[1606]: Session 19 logged out. Waiting for processes to exit.
Jan 23 18:52:20.472989 systemd-logind[1606]: Removed session 19.
Jan 23 18:52:22.140644 kubelet[2789]: E0123 18:52:22.140582 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313"
Jan 23 18:52:26.140429 kubelet[2789]: E0123 18:52:26.140385 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb"
Jan 23 18:52:27.144717 kubelet[2789]: E0123 18:52:27.144487 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f"
Jan 23 18:52:30.142243 kubelet[2789]: E0123 18:52:30.142118 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa"
Jan 23 18:52:30.143655 kubelet[2789]: E0123 18:52:30.143589 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90"
Jan 23 18:52:32.141164 kubelet[2789]: E0123 18:52:32.141060 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2"
Jan 23 18:52:33.141268 kubelet[2789]: E0123 18:52:33.141100 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033"
Jan 23 18:52:37.142403 kubelet[2789]: E0123 18:52:37.142175 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb"
Jan 23 18:52:37.143144 kubelet[2789]: E0123 18:52:37.142379 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313"
Jan 23 18:52:38.161308 systemd[1]: cri-containerd-f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0.scope: Deactivated successfully.
Jan 23 18:52:38.162783 systemd[1]: cri-containerd-f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0.scope: Consumed 15.876s CPU time, 111.5M memory peak.
Jan 23 18:52:38.166941 containerd[1630]: time="2026-01-23T18:52:38.166866989Z" level=info msg="received container exit event container_id:\"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\" id:\"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\" pid:3137 exit_status:1 exited_at:{seconds:1769194358 nanos:166017987}"
Jan 23 18:52:38.208616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0-rootfs.mount: Deactivated successfully.
Jan 23 18:52:38.600937 kubelet[2789]: E0123 18:52:38.600745 2789 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57776->10.0.0.2:2379: read: connection timed out"
Jan 23 18:52:38.646621 kubelet[2789]: I0123 18:52:38.646570 2789 scope.go:117] "RemoveContainer" containerID="f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0"
Jan 23 18:52:38.652231 containerd[1630]: time="2026-01-23T18:52:38.651435296Z" level=info msg="CreateContainer within sandbox \"243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 18:52:38.664449 containerd[1630]: time="2026-01-23T18:52:38.664404266Z" level=info msg="Container 19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:38.678082 containerd[1630]: time="2026-01-23T18:52:38.677999908Z" level=info msg="CreateContainer within sandbox \"243d5cc1287be399bb645e36b9297227d0103df20e2ded4a6da0e41b303c3177\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef\""
Jan 23 18:52:38.679091 containerd[1630]: time="2026-01-23T18:52:38.679050190Z" level=info msg="StartContainer for \"19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef\""
Jan 23 18:52:38.680549 containerd[1630]: time="2026-01-23T18:52:38.680494793Z" level=info msg="connecting to shim 19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef" address="unix:///run/containerd/s/8296365632cf01a2f90a51743872576d49e9025a8507b3ae716b1c30cbb6b16c" protocol=ttrpc version=3
Jan 23 18:52:38.715408 systemd[1]: Started cri-containerd-19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef.scope - libcontainer container 19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef.
Jan 23 18:52:38.774414 containerd[1630]: time="2026-01-23T18:52:38.774333352Z" level=info msg="StartContainer for \"19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef\" returns successfully"
Jan 23 18:52:39.141977 kubelet[2789]: E0123 18:52:39.141727 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f"
Jan 23 18:52:39.342866 systemd[1]: cri-containerd-77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4.scope: Deactivated successfully.
Jan 23 18:52:39.343468 systemd[1]: cri-containerd-77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4.scope: Consumed 2.938s CPU time, 59M memory peak, 64K read from disk.
Jan 23 18:52:39.350513 containerd[1630]: time="2026-01-23T18:52:39.350394360Z" level=info msg="received container exit event container_id:\"77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4\" id:\"77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4\" pid:2648 exit_status:1 exited_at:{seconds:1769194359 nanos:348758106}"
Jan 23 18:52:39.412902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4-rootfs.mount: Deactivated successfully.
Jan 23 18:52:39.656510 kubelet[2789]: I0123 18:52:39.656230 2789 scope.go:117] "RemoveContainer" containerID="77df008c01588ad3b350c551169e8ec91e204f3fb4f5a014d48c012129a0f6c4"
Jan 23 18:52:39.659506 containerd[1630]: time="2026-01-23T18:52:39.659413436Z" level=info msg="CreateContainer within sandbox \"d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 18:52:39.685044 containerd[1630]: time="2026-01-23T18:52:39.684837055Z" level=info msg="Container ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:39.697580 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034669203.mount: Deactivated successfully.
Jan 23 18:52:39.706955 containerd[1630]: time="2026-01-23T18:52:39.706859045Z" level=info msg="CreateContainer within sandbox \"d98ef84b90105b75ade80b0a2caa033de1b91c4f9f8c594f7650803bbaf236bc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b\""
Jan 23 18:52:39.707944 containerd[1630]: time="2026-01-23T18:52:39.707896558Z" level=info msg="StartContainer for \"ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b\""
Jan 23 18:52:39.710159 containerd[1630]: time="2026-01-23T18:52:39.710122163Z" level=info msg="connecting to shim ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b" address="unix:///run/containerd/s/f18f62548c96c603771666361a6339ac8cfe8a3b98bd1ef9792bd16e201ad4f4" protocol=ttrpc version=3
Jan 23 18:52:39.753568 systemd[1]: Started cri-containerd-ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b.scope - libcontainer container ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b.
Jan 23 18:52:39.857336 containerd[1630]: time="2026-01-23T18:52:39.857292965Z" level=info msg="StartContainer for \"ea6b185f48e2dcd7fd19cd0b4ca488b5bcca1040283fff94e6d003b1cd067c5b\" returns successfully"
Jan 23 18:52:40.584810 kubelet[2789]: I0123 18:52:40.584077 2789 status_manager.go:895] "Failed to get status for pod" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57692->10.0.0.2:2379: read: connection timed out"
Jan 23 18:52:40.585563 kubelet[2789]: E0123 18:52:40.585387 2789 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:57590->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-b7f687658-b9829.188d70cb5bdd9e80 calico-apiserver 1628 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-b7f687658-b9829,UID:6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa,APIVersion:v1,ResourceVersion:859,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-3-1-de7581f71a,},FirstTimestamp:2026-01-23 18:50:47 +0000 UTC,LastTimestamp:2026-01-23 18:52:30.142056323 +0000 UTC m=+151.078000522,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-3-1-de7581f71a,}"
Jan 23 18:52:43.144503 kubelet[2789]: E0123 18:52:43.144385 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90"
Jan 23 18:52:44.059270 systemd[1]: cri-containerd-51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d.scope: Deactivated successfully.
Jan 23 18:52:44.060460 systemd[1]: cri-containerd-51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d.scope: Consumed 1.992s CPU time, 22M memory peak, 168K read from disk.
Jan 23 18:52:44.064896 containerd[1630]: time="2026-01-23T18:52:44.064848719Z" level=info msg="received container exit event container_id:\"51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d\" id:\"51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d\" pid:2614 exit_status:1 exited_at:{seconds:1769194364 nanos:64428349}"
Jan 23 18:52:44.105970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d-rootfs.mount: Deactivated successfully.
Jan 23 18:52:44.141469 kubelet[2789]: E0123 18:52:44.141385 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033"
Jan 23 18:52:44.674087 kubelet[2789]: I0123 18:52:44.674030 2789 scope.go:117] "RemoveContainer" containerID="51ccf2dea249fc9c4725976a68108f01c9b8c1f0fd9052629c48f6229458252d"
Jan 23 18:52:44.677221 containerd[1630]: time="2026-01-23T18:52:44.676712276Z" level=info msg="CreateContainer within sandbox \"fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 18:52:44.690517 containerd[1630]: time="2026-01-23T18:52:44.690469797Z" level=info msg="Container 11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:52:44.704894 containerd[1630]: time="2026-01-23T18:52:44.704832800Z" level=info msg="CreateContainer within sandbox \"fe1a81e76f89d3015f416a33bac027f842d7efdbec236f2a4e8f5d0392320082\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337\""
Jan 23 18:52:44.705547 containerd[1630]: time="2026-01-23T18:52:44.705490382Z" level=info msg="StartContainer for \"11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337\""
Jan 23 18:52:44.707002 containerd[1630]: time="2026-01-23T18:52:44.706950875Z" level=info msg="connecting to shim 11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337" address="unix:///run/containerd/s/e2d9535b681d7a4f9a185b4b6c19df623e82b964f44c394478f6961f0579e5b4" protocol=ttrpc version=3
Jan 23 18:52:44.739428 systemd[1]: Started cri-containerd-11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337.scope - libcontainer container 11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337.
Jan 23 18:52:44.840390 containerd[1630]: time="2026-01-23T18:52:44.840347329Z" level=info msg="StartContainer for \"11a51b9fbfe97e2d28725f50f1a83f9801d6c1e6a56c4910ba6a70c48dfc0337\" returns successfully"
Jan 23 18:52:45.142736 kubelet[2789]: E0123 18:52:45.142644 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-b9829" podUID="6cd6b32a-ecb3-4f1c-b9d5-5bdeab914efa"
Jan 23 18:52:47.145559 kubelet[2789]: E0123 18:52:47.145484 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-b7f687658-l86zs" podUID="7596438c-5371-4696-b02d-1c3d820234e2"
Jan 23 18:52:48.602209 kubelet[2789]: E0123 18:52:48.602103 2789 controller.go:195] "Failed to update lease" err="Put \"https://89.167.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-3-1-de7581f71a?timeout=10s\": context deadline exceeded"
Jan 23 18:52:49.140639 kubelet[2789]: E0123 18:52:49.140592 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-68dbfc5dfc-dffpg" podUID="c91754db-1ae1-406f-a4ae-966042e218eb"
Jan 23 18:52:50.015641 systemd[1]: cri-containerd-19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef.scope: Deactivated successfully.
Jan 23 18:52:50.018175 containerd[1630]: time="2026-01-23T18:52:50.016673129Z" level=info msg="received container exit event container_id:\"19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef\" id:\"19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef\" pid:5308 exit_status:1 exited_at:{seconds:1769194370 nanos:16412037}"
Jan 23 18:52:50.063052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef-rootfs.mount: Deactivated successfully.
Jan 23 18:52:50.692386 kubelet[2789]: I0123 18:52:50.692275 2789 scope.go:117] "RemoveContainer" containerID="f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0"
Jan 23 18:52:50.693226 kubelet[2789]: I0123 18:52:50.692456 2789 scope.go:117] "RemoveContainer" containerID="19731a5ea1ed1851cc8f36d580da5c5de47d8785e88aca89235dd1741776f8ef"
Jan 23 18:52:50.693226 kubelet[2789]: E0123 18:52:50.692560 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-crghv_tigera-operator(a7606db7-4374-48f1-bff6-9b54df03699b)\"" pod="tigera-operator/tigera-operator-7dcd859c48-crghv" podUID="a7606db7-4374-48f1-bff6-9b54df03699b"
Jan 23 18:52:50.694979 containerd[1630]: time="2026-01-23T18:52:50.694930673Z" level=info msg="RemoveContainer for \"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\""
Jan 23 18:52:50.699613 containerd[1630]: time="2026-01-23T18:52:50.699593173Z" level=info msg="RemoveContainer for \"f29492214e4bc011aeb005d0faad7129ce10f0eb6a267e280928391eb6a4dca0\" returns successfully"
Jan 23 18:52:52.142003 kubelet[2789]: E0123 18:52:52.141920 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hl4mg" podUID="2a5fa6c0-649d-4612-b31e-23030250d313"
Jan 23 18:52:53.145219 kubelet[2789]: E0123 18:52:53.143227 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-54d5d997cf-dvx46" podUID="f81f9a64-9c09-4160-be5b-578a1ab2c98f"
Jan 23 18:52:55.141793 kubelet[2789]: E0123 18:52:55.141692 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-9pqgl" podUID="10031b0c-bbc2-4f8e-9df2-1b6971eda033"
Jan 23 18:52:55.142860 kubelet[2789]: E0123 18:52:55.142567 2789 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76b4596f9f-n7xpc" podUID="18c41309-8840-4c5c-a0e6-d6fb41f37c90"