Jan 23 01:00:00.868433 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Jan 22 22:22:03 -00 2026
Jan 23 01:00:00.868452 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:00:00.868459 kernel: BIOS-provided physical RAM map:
Jan 23 01:00:00.868464 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:00:00.868471 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ed3efff] usable
Jan 23 01:00:00.868475 kernel: BIOS-e820: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 01:00:00.868481 kernel: BIOS-e820: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 01:00:00.868485 kernel: BIOS-e820: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 01:00:00.868490 kernel: BIOS-e820: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 01:00:00.868495 kernel: BIOS-e820: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 01:00:00.868500 kernel: BIOS-e820: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 01:00:00.868504 kernel: BIOS-e820: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 01:00:00.868509 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:00:00.868516 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:00:00.868521 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:00:00.868526 kernel: BIOS-e820: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 01:00:00.868531 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:00:00.868536 kernel: NX (Execute Disable) protection: active
Jan 23 01:00:00.868545 kernel: APIC: Static calls initialized
Jan 23 01:00:00.868553 kernel: e820: update [mem 0x7dfae018-0x7dfb7a57] usable ==> usable
Jan 23 01:00:00.868564 kernel: e820: update [mem 0x7df72018-0x7dfad657] usable ==> usable
Jan 23 01:00:00.868577 kernel: e820: update [mem 0x7df36018-0x7df71657] usable ==> usable
Jan 23 01:00:00.868590 kernel: extended physical RAM map:
Jan 23 01:00:00.868599 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 01:00:00.868607 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000007df36017] usable
Jan 23 01:00:00.868614 kernel: reserve setup_data: [mem 0x000000007df36018-0x000000007df71657] usable
Jan 23 01:00:00.868620 kernel: reserve setup_data: [mem 0x000000007df71658-0x000000007df72017] usable
Jan 23 01:00:00.868631 kernel: reserve setup_data: [mem 0x000000007df72018-0x000000007dfad657] usable
Jan 23 01:00:00.868641 kernel: reserve setup_data: [mem 0x000000007dfad658-0x000000007dfae017] usable
Jan 23 01:00:00.868652 kernel: reserve setup_data: [mem 0x000000007dfae018-0x000000007dfb7a57] usable
Jan 23 01:00:00.868659 kernel: reserve setup_data: [mem 0x000000007dfb7a58-0x000000007ed3efff] usable
Jan 23 01:00:00.868667 kernel: reserve setup_data: [mem 0x000000007ed3f000-0x000000007edfffff] reserved
Jan 23 01:00:00.868679 kernel: reserve setup_data: [mem 0x000000007ee00000-0x000000007f8ecfff] usable
Jan 23 01:00:00.868687 kernel: reserve setup_data: [mem 0x000000007f8ed000-0x000000007fb6cfff] reserved
Jan 23 01:00:00.868695 kernel: reserve setup_data: [mem 0x000000007fb6d000-0x000000007fb7efff] ACPI data
Jan 23 01:00:00.868702 kernel: reserve setup_data: [mem 0x000000007fb7f000-0x000000007fbfefff] ACPI NVS
Jan 23 01:00:00.868709 kernel: reserve setup_data: [mem 0x000000007fbff000-0x000000007ff7bfff] usable
Jan 23 01:00:00.868714 kernel: reserve setup_data: [mem 0x000000007ff7c000-0x000000007fffffff] reserved
Jan 23 01:00:00.868719 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jan 23 01:00:00.868724 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 23 01:00:00.868736 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Jan 23 01:00:00.868741 kernel: reserve setup_data: [mem 0x0000000100000000-0x0000000179ffffff] usable
Jan 23 01:00:00.868762 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jan 23 01:00:00.868768 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 23 01:00:00.868773 kernel: efi: SMBIOS=0x7f988000 SMBIOS 3.0=0x7f986000 ACPI=0x7fb7e000 ACPI 2.0=0x7fb7e014 MEMATTR=0x7e845198 RNG=0x7fb73018
Jan 23 01:00:00.868791 kernel: random: crng init done
Jan 23 01:00:00.868796 kernel: efi: Remove mem136: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Jan 23 01:00:00.868801 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Jan 23 01:00:00.868806 kernel: secureboot: Secure boot disabled
Jan 23 01:00:00.868815 kernel: SMBIOS 3.0.0 present.
Jan 23 01:00:00.868820 kernel: DMI: Hetzner vServer/Standard PC (Q35 + ICH9, 2009), BIOS 20171111 11/11/2017
Jan 23 01:00:00.868825 kernel: DMI: Memory slots populated: 1/1
Jan 23 01:00:00.868830 kernel: Hypervisor detected: KVM
Jan 23 01:00:00.868835 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 01:00:00.868840 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 23 01:00:00.868845 kernel: kvm-clock: using sched offset of 13816684747 cycles
Jan 23 01:00:00.868853 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 23 01:00:00.868859 kernel: tsc: Detected 2399.998 MHz processor
Jan 23 01:00:00.868864 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 01:00:00.868870 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 01:00:00.868875 kernel: last_pfn = 0x17a000 max_arch_pfn = 0x10000000000
Jan 23 01:00:00.868881 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 01:00:00.868886 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 01:00:00.868891 kernel: last_pfn = 0x7ff7c max_arch_pfn = 0x10000000000
Jan 23 01:00:00.868897 kernel: Using GB pages for direct mapping
Jan 23 01:00:00.868904 kernel: ACPI: Early table checksum verification disabled
Jan 23 01:00:00.868909 kernel: ACPI: RSDP 0x000000007FB7E014 000024 (v02 BOCHS )
Jan 23 01:00:00.868915 kernel: ACPI: XSDT 0x000000007FB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jan 23 01:00:00.868920 kernel: ACPI: FACP 0x000000007FB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868925 kernel: ACPI: DSDT 0x000000007FB7A000 002443 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868931 kernel: ACPI: FACS 0x000000007FBDD000 000040
Jan 23 01:00:00.868936 kernel: ACPI: APIC 0x000000007FB78000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868941 kernel: ACPI: HPET 0x000000007FB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868946 kernel: ACPI: MCFG 0x000000007FB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868954 kernel: ACPI: WAET 0x000000007FB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 23 01:00:00.868959 kernel: ACPI: BGRT 0x000000007FB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 23 01:00:00.868965 kernel: ACPI: Reserving FACP table memory at [mem 0x7fb79000-0x7fb790f3]
Jan 23 01:00:00.868970 kernel: ACPI: Reserving DSDT table memory at [mem 0x7fb7a000-0x7fb7c442]
Jan 23 01:00:00.868975 kernel: ACPI: Reserving FACS table memory at [mem 0x7fbdd000-0x7fbdd03f]
Jan 23 01:00:00.868980 kernel: ACPI: Reserving APIC table memory at [mem 0x7fb78000-0x7fb7807f]
Jan 23 01:00:00.868986 kernel: ACPI: Reserving HPET table memory at [mem 0x7fb77000-0x7fb77037]
Jan 23 01:00:00.868991 kernel: ACPI: Reserving MCFG table memory at [mem 0x7fb76000-0x7fb7603b]
Jan 23 01:00:00.868996 kernel: ACPI: Reserving WAET table memory at [mem 0x7fb75000-0x7fb75027]
Jan 23 01:00:00.869003 kernel: ACPI: Reserving BGRT table memory at [mem 0x7fb74000-0x7fb74037]
Jan 23 01:00:00.869009 kernel: No NUMA configuration found
Jan 23 01:00:00.869014 kernel: Faking a node at [mem 0x0000000000000000-0x0000000179ffffff]
Jan 23 01:00:00.869019 kernel: NODE_DATA(0) allocated [mem 0x179ff6dc0-0x179ffdfff]
Jan 23 01:00:00.869025 kernel: Zone ranges:
Jan 23 01:00:00.869030 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 01:00:00.869035 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 01:00:00.869040 kernel: Normal [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 01:00:00.869046 kernel: Device empty
Jan 23 01:00:00.869051 kernel: Movable zone start for each node
Jan 23 01:00:00.869058 kernel: Early memory node ranges
Jan 23 01:00:00.869064 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 01:00:00.869069 kernel: node 0: [mem 0x0000000000100000-0x000000007ed3efff]
Jan 23 01:00:00.869074 kernel: node 0: [mem 0x000000007ee00000-0x000000007f8ecfff]
Jan 23 01:00:00.869079 kernel: node 0: [mem 0x000000007fbff000-0x000000007ff7bfff]
Jan 23 01:00:00.869084 kernel: node 0: [mem 0x0000000100000000-0x0000000179ffffff]
Jan 23 01:00:00.869089 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x0000000179ffffff]
Jan 23 01:00:00.869095 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 01:00:00.869100 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 01:00:00.869108 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Jan 23 01:00:00.869113 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jan 23 01:00:00.869118 kernel: On node 0, zone Normal: 132 pages in unavailable ranges
Jan 23 01:00:00.869124 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 23 01:00:00.869129 kernel: ACPI: PM-Timer IO Port: 0x608
Jan 23 01:00:00.869134 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 23 01:00:00.869139 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 01:00:00.869144 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 23 01:00:00.869150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 23 01:00:00.869157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 01:00:00.869162 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 23 01:00:00.869167 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 23 01:00:00.869173 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 01:00:00.869178 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jan 23 01:00:00.869183 kernel: CPU topo: Max. logical packages: 1
Jan 23 01:00:00.869189 kernel: CPU topo: Max. logical dies: 1
Jan 23 01:00:00.869202 kernel: CPU topo: Max. dies per package: 1
Jan 23 01:00:00.869208 kernel: CPU topo: Max. threads per core: 1
Jan 23 01:00:00.869213 kernel: CPU topo: Num. cores per package: 2
Jan 23 01:00:00.869219 kernel: CPU topo: Num. threads per package: 2
Jan 23 01:00:00.869224 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 01:00:00.869229 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 23 01:00:00.869237 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Jan 23 01:00:00.869242 kernel: Booting paravirtualized kernel on KVM
Jan 23 01:00:00.869248 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 01:00:00.869254 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 01:00:00.869261 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 01:00:00.869267 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 01:00:00.869272 kernel: pcpu-alloc: [0] 0 1
Jan 23 01:00:00.869278 kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 23 01:00:00.869284 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:00:00.869289 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 01:00:00.869295 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 01:00:00.869300 kernel: Fallback order for Node 0: 0
Jan 23 01:00:00.869306 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1022792
Jan 23 01:00:00.869313 kernel: Policy zone: Normal
Jan 23 01:00:00.869319 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 01:00:00.869324 kernel: software IO TLB: area num 2.
Jan 23 01:00:00.869330 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 01:00:00.869335 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 01:00:00.869341 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 01:00:00.869346 kernel: Dynamic Preempt: voluntary
Jan 23 01:00:00.869351 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 01:00:00.869361 kernel: rcu: RCU event tracing is enabled.
Jan 23 01:00:00.869369 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 01:00:00.869374 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 01:00:00.869380 kernel: Rude variant of Tasks RCU enabled.
Jan 23 01:00:00.869385 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 01:00:00.869391 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 01:00:00.869396 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 01:00:00.869402 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:00:00.869407 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:00:00.869413 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 01:00:00.869420 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jan 23 01:00:00.869426 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 01:00:00.869431 kernel: Console: colour dummy device 80x25
Jan 23 01:00:00.869437 kernel: printk: legacy console [tty0] enabled
Jan 23 01:00:00.869442 kernel: printk: legacy console [ttyS0] enabled
Jan 23 01:00:00.869447 kernel: ACPI: Core revision 20240827
Jan 23 01:00:00.869453 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jan 23 01:00:00.869458 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 01:00:00.869464 kernel: x2apic enabled
Jan 23 01:00:00.869471 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 01:00:00.869477 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 23 01:00:00.869482 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Jan 23 01:00:00.869488 kernel: Calibrating delay loop (skipped) preset value.. 4799.99 BogoMIPS (lpj=2399998)
Jan 23 01:00:00.869494 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 01:00:00.869499 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 23 01:00:00.869505 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 23 01:00:00.869510 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 01:00:00.869516 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 23 01:00:00.869523 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 23 01:00:00.869529 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 23 01:00:00.869537 kernel: active return thunk: srso_alias_return_thunk
Jan 23 01:00:00.869545 kernel: Speculative Return Stack Overflow: Mitigation: Safe RET
Jan 23 01:00:00.869557 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Jan 23 01:00:00.869570 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 23 01:00:00.869584 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 01:00:00.869594 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 01:00:00.869600 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 01:00:00.869609 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 01:00:00.869614 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 01:00:00.869620 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 01:00:00.869626 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Jan 23 01:00:00.869631 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 23 01:00:00.869637 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 23 01:00:00.869642 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 23 01:00:00.869648 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 01:00:00.869653 kernel: x86/fpu: xstate_offset[9]: 2432, xstate_sizes[9]: 8
Jan 23 01:00:00.869661 kernel: x86/fpu: Enabled xstate features 0x2e7, context size is 2440 bytes, using 'compacted' format.
Jan 23 01:00:00.869668 kernel: Freeing SMP alternatives memory: 32K
Jan 23 01:00:00.869677 kernel: pid_max: default: 32768 minimum: 301
Jan 23 01:00:00.869689 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 01:00:00.869699 kernel: landlock: Up and running.
Jan 23 01:00:00.869712 kernel: SELinux: Initializing.
Jan 23 01:00:00.869721 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:00:00.869728 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 01:00:00.869734 kernel: smpboot: CPU0: AMD EPYC-Genoa Processor (family: 0x19, model: 0x11, stepping: 0x0)
Jan 23 01:00:00.869756 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 23 01:00:00.869762 kernel: ... version: 0
Jan 23 01:00:00.869767 kernel: ... bit width: 48
Jan 23 01:00:00.869773 kernel: ... generic registers: 6
Jan 23 01:00:00.869779 kernel: ... value mask: 0000ffffffffffff
Jan 23 01:00:00.869792 kernel: ... max period: 00007fffffffffff
Jan 23 01:00:00.869797 kernel: ... fixed-purpose events: 0
Jan 23 01:00:00.869803 kernel: ... event mask: 000000000000003f
Jan 23 01:00:00.869808 kernel: signal: max sigframe size: 3376
Jan 23 01:00:00.869817 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 01:00:00.869823 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 01:00:00.869828 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 01:00:00.869834 kernel: smp: Bringing up secondary CPUs ...
Jan 23 01:00:00.869839 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 01:00:00.869844 kernel: .... node #0, CPUs: #1
Jan 23 01:00:00.869850 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 01:00:00.869855 kernel: smpboot: Total of 2 processors activated (9599.99 BogoMIPS)
Jan 23 01:00:00.869861 kernel: Memory: 3848512K/4091168K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46196K init, 2564K bss, 237024K reserved, 0K cma-reserved)
Jan 23 01:00:00.869869 kernel: devtmpfs: initialized
Jan 23 01:00:00.869874 kernel: x86/mm: Memory block size: 128MB
Jan 23 01:00:00.869880 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7fb7f000-0x7fbfefff] (524288 bytes)
Jan 23 01:00:00.869886 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 01:00:00.869891 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 01:00:00.869897 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 01:00:00.869902 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 01:00:00.869908 kernel: audit: initializing netlink subsys (disabled)
Jan 23 01:00:00.869913 kernel: audit: type=2000 audit(1769129997.634:1): state=initialized audit_enabled=0 res=1
Jan 23 01:00:00.869921 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 01:00:00.869927 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 01:00:00.869932 kernel: cpuidle: using governor menu
Jan 23 01:00:00.869937 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 01:00:00.869943 kernel: dca service started, version 1.12.1
Jan 23 01:00:00.869949 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jan 23 01:00:00.869954 kernel: PCI: Using configuration type 1 for base access
Jan 23 01:00:00.869960 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 01:00:00.869965 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 01:00:00.869973 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 01:00:00.869979 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 01:00:00.869984 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 01:00:00.869990 kernel: ACPI: Added _OSI(Module Device)
Jan 23 01:00:00.869995 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 01:00:00.870001 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 01:00:00.870006 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 01:00:00.870012 kernel: ACPI: Interpreter enabled
Jan 23 01:00:00.870017 kernel: ACPI: PM: (supports S0 S5)
Jan 23 01:00:00.870024 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 01:00:00.870030 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 01:00:00.870036 kernel: PCI: Using E820 reservations for host bridge windows
Jan 23 01:00:00.870041 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jan 23 01:00:00.870047 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 23 01:00:00.870204 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 01:00:00.870308 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jan 23 01:00:00.870408 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jan 23 01:00:00.870415 kernel: PCI host bridge to bus 0000:00
Jan 23 01:00:00.870549 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 23 01:00:00.870687 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jan 23 01:00:00.870854 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 23 01:00:00.870953 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xdfffffff window]
Jan 23 01:00:00.871041 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jan 23 01:00:00.871133 kernel: pci_bus 0000:00: root bus resource [mem 0xc000000000-0xc7ffffffff window]
Jan 23 01:00:00.871220 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 23 01:00:00.871338 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jan 23 01:00:00.871446 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 23 01:00:00.871548 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80000000-0x807fffff pref]
Jan 23 01:00:00.871690 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc060500000-0xc060503fff 64bit pref]
Jan 23 01:00:00.871837 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8138a000-0x8138afff]
Jan 23 01:00:00.871997 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jan 23 01:00:00.872104 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 23 01:00:00.872212 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.872309 kernel: pci 0000:00:02.0: BAR 0 [mem 0x81389000-0x81389fff]
Jan 23 01:00:00.872404 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 01:00:00.872499 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 23 01:00:00.872680 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 01:00:00.873274 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.873382 kernel: pci 0000:00:02.1: BAR 0 [mem 0x81388000-0x81388fff]
Jan 23 01:00:00.873479 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 01:00:00.873610 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 23 01:00:00.873772 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.873886 kernel: pci 0000:00:02.2: BAR 0 [mem 0x81387000-0x81387fff]
Jan 23 01:00:00.873987 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 01:00:00.874083 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 23 01:00:00.874178 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 01:00:00.874279 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.874375 kernel: pci 0000:00:02.3: BAR 0 [mem 0x81386000-0x81386fff]
Jan 23 01:00:00.874471 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 01:00:00.874592 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 01:00:00.874757 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.874877 kernel: pci 0000:00:02.4: BAR 0 [mem 0x81385000-0x81385fff]
Jan 23 01:00:00.874974 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 01:00:00.875069 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 23 01:00:00.875165 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 01:00:00.875275 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.875372 kernel: pci 0000:00:02.5: BAR 0 [mem 0x81384000-0x81384fff]
Jan 23 01:00:00.875471 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 01:00:00.875588 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 23 01:00:00.875737 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 01:00:00.876203 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.876314 kernel: pci 0000:00:02.6: BAR 0 [mem 0x81383000-0x81383fff]
Jan 23 01:00:00.876411 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 01:00:00.876506 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 01:00:00.876645 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 01:00:00.877832 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.877993 kernel: pci 0000:00:02.7: BAR 0 [mem 0x81382000-0x81382fff]
Jan 23 01:00:00.878130 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 01:00:00.878243 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 01:00:00.878340 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 01:00:00.878444 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 23 01:00:00.878578 kernel: pci 0000:00:03.0: BAR 0 [mem 0x81381000-0x81381fff]
Jan 23 01:00:00.878734 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 01:00:00.879910 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 23 01:00:00.880016 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 01:00:00.880125 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jan 23 01:00:00.880222 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jan 23 01:00:00.880327 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jan 23 01:00:00.880423 kernel: pci 0000:00:1f.2: BAR 4 [io 0x6040-0x605f]
Jan 23 01:00:00.880518 kernel: pci 0000:00:1f.2: BAR 5 [mem 0x81380000-0x81380fff]
Jan 23 01:00:00.880704 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jan 23 01:00:00.882372 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6000-0x603f]
Jan 23 01:00:00.882494 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 01:00:00.882643 kernel: pci 0000:01:00.0: BAR 1 [mem 0x81200000-0x81200fff]
Jan 23 01:00:00.882898 kernel: pci 0000:01:00.0: BAR 4 [mem 0xc060000000-0xc060003fff 64bit pref]
Jan 23 01:00:00.883125 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 01:00:00.883324 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 01:00:00.883558 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 23 01:00:00.883714 kernel: pci 0000:02:00.0: BAR 0 [mem 0x81100000-0x81103fff 64bit]
Jan 23 01:00:00.885071 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 01:00:00.885192 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 23 01:00:00.885301 kernel: pci 0000:03:00.0: BAR 1 [mem 0x81000000-0x81000fff]
Jan 23 01:00:00.885404 kernel: pci 0000:03:00.0: BAR 4 [mem 0xc060100000-0xc060103fff 64bit pref]
Jan 23 01:00:00.885501 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 01:00:00.885653 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 01:00:00.888826 kernel: pci 0000:04:00.0: BAR 4 [mem 0xc060200000-0xc060203fff 64bit pref]
Jan 23 01:00:00.888941 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 01:00:00.889054 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 23 01:00:00.889162 kernel: pci 0000:05:00.0: BAR 1 [mem 0x80f00000-0x80f00fff]
Jan 23 01:00:00.889263 kernel: pci 0000:05:00.0: BAR 4 [mem 0xc060300000-0xc060303fff 64bit pref]
Jan 23 01:00:00.889362 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 01:00:00.889469 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 23 01:00:00.889610 kernel: pci 0000:06:00.0: BAR 1 [mem 0x80e00000-0x80e00fff]
Jan 23 01:00:00.889791 kernel: pci 0000:06:00.0: BAR 4 [mem 0xc060400000-0xc060403fff 64bit pref]
Jan 23 01:00:00.889972 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 01:00:00.889984 kernel: acpiphp: Slot [0] registered
Jan 23 01:00:00.890103 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 23 01:00:00.890207 kernel: pci 0000:07:00.0: BAR 1 [mem 0x80c00000-0x80c00fff]
Jan 23 01:00:00.890307 kernel: pci 0000:07:00.0: BAR 4 [mem 0xc000000000-0xc000003fff 64bit pref]
Jan 23 01:00:00.890408 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 23 01:00:00.890571 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 01:00:00.890584 kernel: acpiphp: Slot [0-2] registered
Jan 23 01:00:00.890698 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 01:00:00.890705 kernel: acpiphp: Slot [0-3] registered
Jan 23 01:00:00.890873 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 01:00:00.890895 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 23 01:00:00.890914 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 23 01:00:00.890922 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 23 01:00:00.890928 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 23 01:00:00.890934 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jan 23 01:00:00.890942 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jan 23 01:00:00.890948 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jan 23 01:00:00.890954 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jan 23 01:00:00.890960 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jan 23 01:00:00.890965 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jan 23 01:00:00.890971 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jan 23 01:00:00.890977 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jan 23 01:00:00.890983 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jan 23 01:00:00.890989 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jan 23 01:00:00.890996 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jan 23 01:00:00.891005 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jan 23 01:00:00.891010 kernel: iommu: Default domain type: Translated
Jan 23 01:00:00.891016 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 01:00:00.891022 kernel: efivars: Registered efivars operations
Jan 23 01:00:00.891030 kernel: PCI: Using ACPI for IRQ routing
Jan 23 01:00:00.891035 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 23 01:00:00.891041 kernel: e820: reserve RAM buffer [mem 0x7df36018-0x7fffffff]
Jan 23 01:00:00.891047 kernel: e820: reserve RAM buffer [mem 0x7df72018-0x7fffffff]
Jan 23 01:00:00.891053 kernel: e820: reserve RAM buffer [mem 0x7dfae018-0x7fffffff]
Jan 23 01:00:00.891058 kernel: e820: reserve RAM buffer [mem 0x7ed3f000-0x7fffffff]
Jan 23 01:00:00.891064 kernel: e820: reserve RAM buffer [mem 0x7f8ed000-0x7fffffff]
Jan 23 01:00:00.891070 kernel: e820: reserve RAM buffer [mem 0x7ff7c000-0x7fffffff]
Jan 23 01:00:00.891076 kernel: e820: reserve RAM buffer [mem 0x17a000000-0x17bffffff]
Jan 23 01:00:00.891180 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jan 23 01:00:00.891277 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jan 23 01:00:00.891373 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 23 01:00:00.891380 kernel: vgaarb: loaded
Jan 23 01:00:00.891386 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jan 23 01:00:00.891392 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jan 23 01:00:00.891397 kernel: clocksource: Switched to clocksource kvm-clock
Jan 23 01:00:00.891403 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 01:00:00.891409 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 01:00:00.891418 kernel: pnp: PnP ACPI init
Jan 23 01:00:00.891524 kernel: system 00:04: [mem 0xe0000000-0xefffffff window] has been reserved
Jan 23 01:00:00.891537 kernel: pnp: PnP ACPI: found 5 devices
Jan 23 01:00:00.891551 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 01:00:00.891566 kernel: NET: Registered PF_INET protocol family
Jan 23 01:00:00.891578 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 01:00:00.891588 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 01:00:00.891599 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 01:00:00.891616 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 01:00:00.891623 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 01:00:00.891629 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 01:00:00.891635 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:00:00.891641 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 01:00:00.891647 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 01:00:00.891653 kernel: NET: Registered PF_XDP protocol family
Jan 23 01:00:00.891815 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 01:00:00.891925 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]: can't claim; no compatible bridge window
Jan 23 01:00:00.892027 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 23 01:00:00.892124 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 23 01:00:00.892220 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 23 01:00:00.892318 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]: assigned
Jan 23 01:00:00.892413 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]: assigned
Jan 23 01:00:00.892510 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]: assigned
Jan 23 01:00:00.892658 kernel: pci 0000:01:00.0: ROM [mem 0x81280000-0x812fffff pref]: assigned
Jan 23 01:00:00.894442 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 23 01:00:00.894581 kernel: pci 0000:00:02.0: bridge window [mem 0x81200000-0x812fffff]
Jan 23 01:00:00.894700 kernel: pci 0000:00:02.0: bridge window [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 01:00:00.894828 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 23 01:00:00.894926 kernel: pci 0000:00:02.1: bridge window [mem 0x81100000-0x811fffff]
Jan 23 01:00:00.895023 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 23 01:00:00.895119 kernel: pci 0000:00:02.2: bridge window [mem 0x81000000-0x810fffff]
Jan 23 01:00:00.895214 kernel: pci 0000:00:02.2: bridge window [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 01:00:00.895310 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 23 01:00:00.895411 kernel: pci 0000:00:02.3: bridge window [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 01:00:00.895506 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 23 01:00:00.895641 kernel: pci 0000:00:02.4: bridge window [mem 0x80f00000-0x80ffffff]
Jan 23 01:00:00.896772 kernel: pci 0000:00:02.4: bridge window [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 01:00:00.896907 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 23 01:00:00.897009 kernel: pci 0000:00:02.5: bridge window [mem 0x80e00000-0x80efffff]
Jan 23 01:00:00.897105 kernel: pci 0000:00:02.5: bridge window [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 01:00:00.897207 kernel: pci 0000:07:00.0: ROM [mem 0x80c80000-0x80cfffff pref]: assigned
Jan 23 01:00:00.897305 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 23 01:00:00.897405 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x1fff]
Jan 23 01:00:00.897500 kernel: pci 0000:00:02.6: bridge window [mem 0x80c00000-0x80dfffff]
Jan 23 01:00:00.897656 kernel: pci 0000:00:02.6: bridge window [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 01:00:00.898529 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 23 01:00:00.898703 kernel: pci 0000:00:02.7: bridge window [io 0x2000-0x2fff]
Jan 23 01:00:00.898869 kernel: pci 0000:00:02.7: bridge window [mem 0x80a00000-0x80bfffff]
Jan 23 01:00:00.899008 kernel: pci 0000:00:02.7: bridge window [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 01:00:00.899166 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 23 01:00:00.899309 kernel: pci 0000:00:03.0: bridge window [io 0x3000-0x3fff]
Jan 23 01:00:00.899478 kernel: pci 0000:00:03.0: bridge window [mem 0x80800000-0x809fffff]
Jan 23 01:00:00.899634 kernel: pci 0000:00:03.0: bridge window [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 01:00:00.900829 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jan 23 01:00:00.900933 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jan 23 01:00:00.901028 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 23 01:00:00.901122 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xdfffffff window]
Jan 23 01:00:00.901212 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jan 23 01:00:00.901301 kernel: pci_bus 0000:00: resource 9 [mem 0xc000000000-0xc7ffffffff window]
Jan 23 01:00:00.901402 kernel: pci_bus 0000:01: resource 1 [mem 0x81200000-0x812fffff]
Jan 23 01:00:00.901497 kernel: pci_bus 0000:01: resource 2 [mem 0xc060000000-0xc0600fffff 64bit pref]
Jan 23 01:00:00.901637 kernel: pci_bus 0000:02: resource 1 [mem 0x81100000-0x811fffff]
Jan 23 01:00:00.903045 kernel: pci_bus 0000:03: resource 1 [mem 0x81000000-0x810fffff]
Jan 23 01:00:00.903162 kernel: pci_bus 0000:03: resource 2 [mem 0xc060100000-0xc0601fffff 64bit pref]
Jan 23 01:00:00.903265 kernel: pci_bus 0000:04: resource 2 [mem 0xc060200000-0xc0602fffff 64bit pref]
Jan 23 01:00:00.903365 kernel: pci_bus 0000:05: resource 1 [mem 0x80f00000-0x80ffffff]
Jan 23 01:00:00.903459 kernel: pci_bus 0000:05: resource 2 [mem 0xc060300000-0xc0603fffff 64bit pref]
Jan 23 01:00:00.903594 kernel: pci_bus 0000:06: resource 1 [mem 0x80e00000-0x80efffff]
Jan 23 01:00:00.903760 kernel: pci_bus 0000:06: resource 2 [mem 0xc060400000-0xc0604fffff 64bit pref]
Jan 23 01:00:00.903882 kernel: pci_bus 0000:07: resource 0 [io 0x1000-0x1fff]
Jan 23 01:00:00.903976 kernel: pci_bus 0000:07: resource 1 [mem 0x80c00000-0x80dfffff]
Jan 23 01:00:00.904069 kernel: pci_bus 0000:07: resource 2 [mem 0xc000000000-0xc01fffffff 64bit pref]
Jan 23 01:00:00.904167 kernel: pci_bus 0000:08: resource 0 [io 0x2000-0x2fff]
Jan 23 01:00:00.904259 kernel: pci_bus 0000:08: resource 1 [mem 0x80a00000-0x80bfffff]
Jan 23 01:00:00.904351 kernel: pci_bus 0000:08: resource 2 [mem 0xc020000000-0xc03fffffff 64bit pref]
Jan 23 01:00:00.904450 kernel: pci_bus 0000:09: resource 0 [io 0x3000-0x3fff]
Jan 23 01:00:00.904547 kernel: pci_bus 0000:09: resource 1 [mem 0x80800000-0x809fffff]
Jan 23 01:00:00.904716 kernel: pci_bus 0000:09: resource 2 [mem 0xc040000000-0xc05fffffff 64bit pref]
Jan 23 01:00:00.904729 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jan 23 01:00:00.904736 kernel: PCI: CLS 0 bytes, default 64
Jan 23 01:00:00.904755 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 01:00:00.904761 kernel: software IO TLB: mapped [mem 0x0000000077ffd000-0x000000007bffd000] (64MB)
Jan 23 01:00:00.904768 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x229835b7123, max_idle_ns: 440795242976 ns
Jan 23 01:00:00.904774 kernel: Initialise system trusted keyrings
Jan 23 01:00:00.904792 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 01:00:00.904798 kernel: Key type asymmetric registered
Jan 23 01:00:00.904804 kernel: Asymmetric key parser 'x509' registered
Jan 23 01:00:00.904812 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 01:00:00.904817 kernel: io scheduler mq-deadline registered
Jan 23 01:00:00.904823 kernel: io scheduler kyber registered
Jan 23 01:00:00.904829 kernel: io scheduler bfq registered
Jan 23 01:00:00.904935 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Jan 23 01:00:00.905036 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Jan 23 01:00:00.905136 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Jan 23 01:00:00.905233 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Jan 23 01:00:00.905330 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Jan 23 01:00:00.905430 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Jan 23 01:00:00.905528 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Jan 23 01:00:00.905679 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Jan 23 01:00:00.905807 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Jan 23 01:00:00.905905 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Jan 23 01:00:00.906030 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Jan 23 01:00:00.906130 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Jan 23 01:00:00.906226 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Jan 23 01:00:00.906323 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Jan 23 01:00:00.906418 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Jan 23 01:00:00.906513 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Jan 23 01:00:00.906525 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jan 23 01:00:00.906663 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 32
Jan 23 01:00:00.907834 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 32
Jan 23 01:00:00.907853 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 01:00:00.907864 kernel: ACPI: \_SB_.GSIF: Enabled at IRQ 21
Jan 23 01:00:00.907873 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 01:00:00.907881 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 01:00:00.907890 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 23 01:00:00.907905 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 23 01:00:00.907918 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 23 01:00:00.908067 kernel: rtc_cmos 00:03: RTC can wake from S4
Jan 23 01:00:00.908086 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jan 23 01:00:00.908198 kernel: rtc_cmos 00:03: registered as rtc0
Jan 23 01:00:00.908294 kernel: rtc_cmos 00:03: setting system clock to 2026-01-23T01:00:00 UTC (1769130000)
Jan 23 01:00:00.908387 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jan 23 01:00:00.908395 kernel: amd_pstate: The CPPC feature is supported but currently disabled by the BIOS. Please enable it if your BIOS has the CPPC option.
Jan 23 01:00:00.908406 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 23 01:00:00.908412 kernel: efifb: probing for efifb
Jan 23 01:00:00.908418 kernel: efifb: framebuffer at 0x80000000, using 4000k, total 4000k
Jan 23 01:00:00.908424 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jan 23 01:00:00.908429 kernel: efifb: scrolling: redraw
Jan 23 01:00:00.908435 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 01:00:00.908441 kernel: Console: switching to colour frame buffer device 160x50
Jan 23 01:00:00.908447 kernel: fb0: EFI VGA frame buffer device
Jan 23 01:00:00.908455 kernel: pstore: Using crash dump compression: deflate
Jan 23 01:00:00.908460 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 01:00:00.908466 kernel: NET: Registered PF_INET6 protocol family
Jan 23 01:00:00.908472 kernel: Segment Routing with IPv6
Jan 23 01:00:00.908478 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 01:00:00.908484 kernel: NET: Registered PF_PACKET protocol family
Jan 23 01:00:00.908489 kernel: Key type dns_resolver registered
Jan 23 01:00:00.908495 kernel: IPI shorthand broadcast: enabled
Jan 23 01:00:00.908509 kernel: sched_clock: Marking stable (2877007514, 239539754)->(3137391073, -20843805)
Jan 23 01:00:00.908518 kernel: registered taskstats version 1
Jan 23 01:00:00.908536 kernel: Loading compiled-in X.509 certificates
Jan 23 01:00:00.908551 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ed54f39d0282729985c39b8ffa9938cacff38d8a'
Jan 23 01:00:00.908563 kernel: Demotion targets for Node 0: null
Jan 23 01:00:00.908576 kernel: Key type .fscrypt registered
Jan 23 01:00:00.908587 kernel: Key type fscrypt-provisioning registered
Jan 23 01:00:00.908593 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 01:00:00.908599 kernel: ima: Allocated hash algorithm: sha1
Jan 23 01:00:00.908605 kernel: ima: No architecture policies found
Jan 23 01:00:00.908615 kernel: clk: Disabling unused clocks
Jan 23 01:00:00.908621 kernel: Warning: unable to open an initial console.
Jan 23 01:00:00.908627 kernel: Freeing unused kernel image (initmem) memory: 46196K
Jan 23 01:00:00.908633 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 01:00:00.908639 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 01:00:00.908644 kernel: Run /init as init process
Jan 23 01:00:00.908650 kernel: with arguments:
Jan 23 01:00:00.908656 kernel: /init
Jan 23 01:00:00.908662 kernel: with environment:
Jan 23 01:00:00.908670 kernel: HOME=/
Jan 23 01:00:00.908675 kernel: TERM=linux
Jan 23 01:00:00.908685 systemd[1]: Successfully made /usr/ read-only.
Jan 23 01:00:00.908701 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:00:00.908717 systemd[1]: Detected virtualization kvm.
Jan 23 01:00:00.908728 systemd[1]: Detected architecture x86-64.
Jan 23 01:00:00.908737 systemd[1]: Running in initrd.
Jan 23 01:00:00.908757 systemd[1]: No hostname configured, using default hostname.
Jan 23 01:00:00.908767 systemd[1]: Hostname set to <localhost>.
Jan 23 01:00:00.908773 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:00:00.908779 systemd[1]: Queued start job for default target initrd.target.
Jan 23 01:00:00.908794 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:00:00.908800 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:00:00.908807 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 01:00:00.908814 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:00:00.908822 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 01:00:00.908839 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 01:00:00.908853 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 01:00:00.908868 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 01:00:00.908876 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:00:00.908886 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:00:00.908900 systemd[1]: Reached target paths.target - Path Units.
Jan 23 01:00:00.908912 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:00:00.908930 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:00:00.908937 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 01:00:00.908944 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:00:00.908950 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:00:00.908956 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 01:00:00.908962 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 01:00:00.908968 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:00:00.908974 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:00:00.908988 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:00:00.909001 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 01:00:00.909013 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 01:00:00.909021 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:00:00.909030 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 01:00:00.909045 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 01:00:00.909053 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 01:00:00.909059 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:00:00.909075 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:00:00.909085 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:00:00.909092 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 01:00:00.909132 systemd-journald[197]: Collecting audit messages is disabled.
Jan 23 01:00:00.909163 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:00:00.909171 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 01:00:00.909178 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 01:00:00.909184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 01:00:00.909190 kernel: Bridge firewalling registered
Jan 23 01:00:00.909203 systemd-journald[197]: Journal started
Jan 23 01:00:00.909225 systemd-journald[197]: Runtime Journal (/run/log/journal/e3b69e7928c3420e8a6b43e0c2b0dce4) is 8M, max 76.1M, 68.1M free.
Jan 23 01:00:00.910237 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:00:00.875921 systemd-modules-load[198]: Inserted module 'overlay'
Jan 23 01:00:00.909091 systemd-modules-load[198]: Inserted module 'br_netfilter'
Jan 23 01:00:00.916620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:00:00.917823 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:00:00.919092 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:00:00.922347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 01:00:00.923872 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 01:00:00.929195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 01:00:00.932760 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 01:00:00.942451 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:00:00.946260 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:00:00.949848 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 01:00:00.951020 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 01:00:00.956872 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:00:00.957477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:00:00.960707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 01:00:00.969988 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=hetzner verity.usrhash=e8d7116310bea9a494780b8becdce41e7cc03ed509d8e2363e08981a47b3edc6
Jan 23 01:00:01.000679 systemd-resolved[237]: Positive Trust Anchors:
Jan 23 01:00:01.001571 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 01:00:01.001608 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 01:00:01.005953 systemd-resolved[237]: Defaulting to hostname 'linux'.
Jan 23 01:00:01.007141 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 01:00:01.008927 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:00:01.047796 kernel: SCSI subsystem initialized
Jan 23 01:00:01.055773 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 01:00:01.065780 kernel: iscsi: registered transport (tcp)
Jan 23 01:00:01.082976 kernel: iscsi: registered transport (qla4xxx)
Jan 23 01:00:01.083033 kernel: QLogic iSCSI HBA Driver
Jan 23 01:00:01.101413 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:00:01.114301 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:00:01.116384 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:00:01.160434 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:00:01.162328 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 01:00:01.215820 kernel: raid6: avx512x4 gen() 44235 MB/s
Jan 23 01:00:01.233841 kernel: raid6: avx512x2 gen() 44985 MB/s
Jan 23 01:00:01.251884 kernel: raid6: avx512x1 gen() 42821 MB/s
Jan 23 01:00:01.269851 kernel: raid6: avx2x4 gen() 44280 MB/s
Jan 23 01:00:01.287792 kernel: raid6: avx2x2 gen() 48396 MB/s
Jan 23 01:00:01.306514 kernel: raid6: avx2x1 gen() 36821 MB/s
Jan 23 01:00:01.306582 kernel: raid6: using algorithm avx2x2 gen() 48396 MB/s
Jan 23 01:00:01.325545 kernel: raid6: .... xor() 36932 MB/s, rmw enabled
Jan 23 01:00:01.325659 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 01:00:01.341816 kernel: xor: automatically using best checksumming function avx
Jan 23 01:00:01.544779 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 01:00:01.553119 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:00:01.554646 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:00:01.581174 systemd-udevd[446]: Using default interface naming scheme 'v255'.
Jan 23 01:00:01.586491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:00:01.591822 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 01:00:01.612606 dracut-pre-trigger[458]: rd.md=0: removing MD RAID activation
Jan 23 01:00:01.648367 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:00:01.650129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:00:01.768741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:00:01.774940 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 01:00:01.897829 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 01:00:01.905763 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues
Jan 23 01:00:01.907435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:00:01.907526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:00:01.908814 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:00:01.910206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:00:01.912338 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:00:01.923480 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:00:01.929230 kernel: scsi host0: Virtio SCSI HBA
Jan 23 01:00:01.923570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:00:01.933200 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 01:00:01.938883 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Jan 23 01:00:01.944455 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jan 23 01:00:01.944481 kernel: AES CTR mode by8 optimization enabled
Jan 23 01:00:01.972783 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:00:01.979807 kernel: libata version 3.00 loaded.
Jan 23 01:00:01.984373 kernel: ACPI: bus type USB registered
Jan 23 01:00:01.984408 kernel: usbcore: registered new interface driver usbfs
Jan 23 01:00:01.988777 kernel: usbcore: registered new interface driver hub
Jan 23 01:00:01.992786 kernel: usbcore: registered new device driver usb
Jan 23 01:00:02.000385 kernel: sd 0:0:0:0: Power-on or device reset occurred
Jan 23 01:00:02.002767 kernel: sd 0:0:0:0: [sda] 160006144 512-byte logical blocks: (81.9 GB/76.3 GiB)
Jan 23 01:00:02.008772 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 23 01:00:02.009479 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Jan 23 01:00:02.009668 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 23 01:00:02.020316 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 01:00:02.020347 kernel: GPT:17805311 != 160006143
Jan 23 01:00:02.020362 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 01:00:02.021905 kernel: GPT:17805311 != 160006143
Jan 23 01:00:02.023016 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 01:00:02.024427 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:00:02.027483 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 23 01:00:02.029977 kernel: ahci 0000:00:1f.2: version 3.0
Jan 23 01:00:02.034762 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 23 01:00:02.034941 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jan 23 01:00:02.034951 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jan 23 01:00:02.039477 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 23 01:00:02.039638 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jan 23 01:00:02.039773 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 23 01:00:02.039920 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jan 23 01:00:02.046364 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 23 01:00:02.046519 kernel: scsi host1: ahci
Jan 23 01:00:02.047862 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 23 01:00:02.050225 kernel: scsi host2: ahci
Jan 23 01:00:02.050257 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 23 01:00:02.052534 kernel: hub 1-0:1.0: USB hub found
Jan 23 01:00:02.052724 kernel: scsi host3: ahci
Jan 23 01:00:02.054757 kernel: hub 1-0:1.0: 4 ports detected
Jan 23 01:00:02.054922 kernel: scsi host4: ahci
Jan 23 01:00:02.055625 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 23 01:00:02.055666 kernel: scsi host5: ahci
Jan 23 01:00:02.057762 kernel: hub 2-0:1.0: USB hub found
Jan 23 01:00:02.057944 kernel: scsi host6: ahci
Jan 23 01:00:02.058074 kernel: hub 2-0:1.0: 4 ports detected
Jan 23 01:00:02.058193 kernel: ata1: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380100 irq 48 lpm-pol 1
Jan 23 01:00:02.068479 kernel: ata2: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380180 irq 48 lpm-pol 1
Jan 23 01:00:02.068502 kernel: ata3: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380200 irq 48 lpm-pol 1
Jan 23 01:00:02.070933 kernel: ata4: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380280 irq 48 lpm-pol 1
Jan 23 01:00:02.077840 kernel: ata5: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380300 irq 48 lpm-pol 1
Jan 23 01:00:02.077864 kernel: ata6: SATA max UDMA/133 abar m4096@0x81380000 port 0x81380380 irq 48 lpm-pol 1
Jan 23 01:00:02.101440 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Jan 23 01:00:02.112572 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Jan 23 01:00:02.118119 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Jan 23 01:00:02.118463 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Jan 23 01:00:02.125350 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Jan 23 01:00:02.127053 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 01:00:02.148547 disk-uuid[646]: Primary Header is updated.
Jan 23 01:00:02.148547 disk-uuid[646]: Secondary Entries is updated.
Jan 23 01:00:02.148547 disk-uuid[646]: Secondary Header is updated.
Jan 23 01:00:02.160770 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:00:02.172774 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:00:02.290919 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 23 01:00:02.397029 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Jan 23 01:00:02.397116 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jan 23 01:00:02.404067 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jan 23 01:00:02.404161 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jan 23 01:00:02.410787 kernel: ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jan 23 01:00:02.410900 kernel: ata1.00: LPM support broken, forcing max_power
Jan 23 01:00:02.416556 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 23 01:00:02.416622 kernel: ata1.00: applying bridge limits
Jan 23 01:00:02.426580 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jan 23 01:00:02.426671 kernel: ata1.00: LPM support broken, forcing max_power
Jan 23 01:00:02.429903 kernel: ata1.00: configured for UDMA/100
Jan 23 01:00:02.437849 kernel: scsi 1:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jan 23 01:00:02.443771 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 01:00:02.468807 kernel: usbcore: registered new interface driver usbhid
Jan 23 01:00:02.468875 kernel: usbhid: USB HID core driver
Jan 23 01:00:02.476978 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input3
Jan 23 01:00:02.477034 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 23 01:00:02.497732 kernel: sr 1:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 23 01:00:02.497994 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 01:00:02.517878 kernel: sr 1:0:0:0: Attached scsi CD-ROM sr0
Jan 23 01:00:02.860723 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:00:02.862591 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:00:02.863863 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:00:02.865329 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:00:02.868555 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 01:00:02.913098 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:00:03.183581 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 23 01:00:03.186250 disk-uuid[647]: The operation has completed successfully.
Jan 23 01:00:03.282629 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 01:00:03.282836 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 01:00:03.332383 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 01:00:03.369429 sh[681]: Success
Jan 23 01:00:03.408500 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 01:00:03.408578 kernel: device-mapper: uevent: version 1.0.3
Jan 23 01:00:03.411999 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 01:00:03.437835 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 01:00:03.518363 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 01:00:03.523899 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 01:00:03.540589 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 01:00:03.558822 kernel: BTRFS: device fsid f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (693)
Jan 23 01:00:03.566014 kernel: BTRFS info (device dm-0): first mount of filesystem f8eb2396-46b8-49a3-a8e7-cd8ad10a3ce4
Jan 23 01:00:03.566065 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:00:03.588296 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 01:00:03.588356 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 01:00:03.588379 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 01:00:03.594727 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 01:00:03.596420 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:00:03.597957 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 01:00:03.599400 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 01:00:03.604952 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 01:00:03.651855 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (726)
Jan 23 01:00:03.663727 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:00:03.663830 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:00:03.688369 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:00:03.688449 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:00:03.688473 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:00:03.698825 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:00:03.701978 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 01:00:03.705986 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 01:00:03.795773 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:00:03.798879 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 01:00:03.861364 ignition[797]: Ignition 2.22.0
Jan 23 01:00:03.862070 ignition[797]: Stage: fetch-offline
Jan 23 01:00:03.862258 ignition[797]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:03.862272 ignition[797]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:03.862344 ignition[797]: parsed url from cmdline: ""
Jan 23 01:00:03.864120 systemd-networkd[863]: lo: Link UP
Jan 23 01:00:03.862347 ignition[797]: no config URL provided
Jan 23 01:00:03.864126 systemd-networkd[863]: lo: Gained carrier
Jan 23 01:00:03.862352 ignition[797]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:00:03.866955 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:00:03.862359 ignition[797]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:00:03.866969 systemd-networkd[863]: Enumeration completed
Jan 23 01:00:03.862363 ignition[797]: failed to fetch config: resource requires networking
Jan 23 01:00:03.867433 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:00:03.863092 ignition[797]: Ignition finished successfully
Jan 23 01:00:03.867438 systemd-networkd[863]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:00:03.868587 systemd-networkd[863]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:00:03.868592 systemd-networkd[863]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 01:00:03.868674 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 01:00:03.869787 systemd-networkd[863]: eth0: Link UP
Jan 23 01:00:03.869993 systemd-networkd[863]: eth1: Link UP
Jan 23 01:00:03.870186 systemd-networkd[863]: eth0: Gained carrier
Jan 23 01:00:03.870196 systemd-networkd[863]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:00:03.872018 systemd[1]: Reached target network.target - Network.
Jan 23 01:00:03.874870 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 01:00:03.876278 systemd-networkd[863]: eth1: Gained carrier
Jan 23 01:00:03.876301 systemd-networkd[863]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 01:00:03.900625 ignition[872]: Ignition 2.22.0
Jan 23 01:00:03.900637 ignition[872]: Stage: fetch
Jan 23 01:00:03.900818 ignition[872]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:03.900829 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:03.900905 ignition[872]: parsed url from cmdline: ""
Jan 23 01:00:03.900909 ignition[872]: no config URL provided
Jan 23 01:00:03.900914 ignition[872]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 01:00:03.900920 ignition[872]: no config at "/usr/lib/ignition/user.ign"
Jan 23 01:00:03.900941 ignition[872]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1
Jan 23 01:00:03.901067 ignition[872]: GET error: Get "http://169.254.169.254/hetzner/v1/userdata": dial tcp 169.254.169.254:80: connect: network is unreachable
Jan 23 01:00:03.916825 systemd-networkd[863]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1
Jan 23 01:00:03.932872 systemd-networkd[863]: eth0: DHCPv4 address 89.167.6.201/32, gateway 172.31.1.1 acquired from 172.31.1.1
Jan 23 01:00:04.101384 ignition[872]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #2
Jan 23 01:00:04.110069 ignition[872]: GET result: OK
Jan 23 01:00:04.110204 ignition[872]: parsing config with SHA512: 16753fbfa363f591aa398cd1794d49205af0481165b04b2bfcb3c7eb13988b7a429ef4dd2eabbc6535a26a2b365d5a0496eb4733bd04c7a541640b782aac1d3e
Jan 23 01:00:04.116586 unknown[872]: fetched base config from "system"
Jan 23 01:00:04.116635 unknown[872]: fetched base config from "system"
Jan 23 01:00:04.117476 ignition[872]: fetch: fetch complete
Jan 23 01:00:04.116648 unknown[872]: fetched user config from "hetzner"
Jan 23 01:00:04.117488 ignition[872]: fetch: fetch passed
Jan 23 01:00:04.117572 ignition[872]: Ignition finished successfully
Jan 23 01:00:04.123563 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 01:00:04.128398 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 01:00:04.183280 ignition[879]: Ignition 2.22.0
Jan 23 01:00:04.183293 ignition[879]: Stage: kargs
Jan 23 01:00:04.183404 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:04.183413 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:04.187821 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 01:00:04.184172 ignition[879]: kargs: kargs passed
Jan 23 01:00:04.184230 ignition[879]: Ignition finished successfully
Jan 23 01:00:04.190947 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 01:00:04.229876 ignition[885]: Ignition 2.22.0
Jan 23 01:00:04.229907 ignition[885]: Stage: disks
Jan 23 01:00:04.230193 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:04.230216 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:04.232493 ignition[885]: disks: disks passed
Jan 23 01:00:04.232587 ignition[885]: Ignition finished successfully
Jan 23 01:00:04.236009 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 01:00:04.238210 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 01:00:04.239896 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 01:00:04.240720 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:00:04.242267 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 01:00:04.243587 systemd[1]: Reached target basic.target - Basic System.
Jan 23 01:00:04.246848 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 01:00:04.280204 systemd-fsck[893]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Jan 23 01:00:04.284714 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 01:00:04.287958 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 01:00:04.417191 kernel: EXT4-fs (sda9): mounted filesystem 2036722e-4586-420e-8dc7-a3b65e840c36 r/w with ordered data mode. Quota mode: none.
Jan 23 01:00:04.417506 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 01:00:04.418331 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:00:04.420203 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:00:04.422815 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 01:00:04.424387 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 01:00:04.425853 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 01:00:04.426530 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:00:04.441552 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 01:00:04.445923 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 01:00:04.455863 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (901)
Jan 23 01:00:04.469767 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:00:04.469853 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:00:04.481043 coreos-metadata[903]: Jan 23 01:00:04.480 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1
Jan 23 01:00:04.482311 coreos-metadata[903]: Jan 23 01:00:04.482 INFO Fetch successful
Jan 23 01:00:04.483615 coreos-metadata[903]: Jan 23 01:00:04.483 INFO wrote hostname ci-4459-2-2-n-ad7bb25bb9 to /sysroot/etc/hostname
Jan 23 01:00:04.486801 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:00:04.486856 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:00:04.490128 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:00:04.492274 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:00:04.493169 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 01:00:04.514456 initrd-setup-root[929]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 01:00:04.521700 initrd-setup-root[936]: cut: /sysroot/etc/group: No such file or directory
Jan 23 01:00:04.528457 initrd-setup-root[943]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 01:00:04.533252 initrd-setup-root[950]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 01:00:04.638607 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 01:00:04.640657 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 01:00:04.641998 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 01:00:04.662095 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 01:00:04.668812 kernel: BTRFS info (device sda6): last unmount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:00:04.677277 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 01:00:04.703039 ignition[1020]: INFO : Ignition 2.22.0
Jan 23 01:00:04.703039 ignition[1020]: INFO : Stage: mount
Jan 23 01:00:04.703941 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:04.703941 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:04.703941 ignition[1020]: INFO : mount: mount passed
Jan 23 01:00:04.703941 ignition[1020]: INFO : Ignition finished successfully
Jan 23 01:00:04.706266 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 01:00:04.707965 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 01:00:04.727278 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 01:00:04.745798 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1029)
Jan 23 01:00:04.750373 kernel: BTRFS info (device sda6): first mount of filesystem a3ccc207-e674-4ba2-b6d8-404b4581ba01
Jan 23 01:00:04.750421 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 01:00:04.755889 kernel: BTRFS info (device sda6): enabling ssd optimizations
Jan 23 01:00:04.755925 kernel: BTRFS info (device sda6): turning on async discard
Jan 23 01:00:04.755935 kernel: BTRFS info (device sda6): enabling free space tree
Jan 23 01:00:04.759187 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 01:00:04.796352 ignition[1046]: INFO : Ignition 2.22.0
Jan 23 01:00:04.796352 ignition[1046]: INFO : Stage: files
Jan 23 01:00:04.798441 ignition[1046]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:04.798441 ignition[1046]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:04.798441 ignition[1046]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 01:00:04.798441 ignition[1046]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 01:00:04.798441 ignition[1046]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 01:00:04.804032 ignition[1046]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 01:00:04.804032 ignition[1046]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 01:00:04.804032 ignition[1046]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 01:00:04.801336 unknown[1046]: wrote ssh authorized keys file for user: core
Jan 23 01:00:04.805439 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:00:04.805439 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 23 01:00:05.027117 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 01:00:05.331391 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 23 01:00:05.331391 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:00:05.334295 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:00:05.341079 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 23 01:00:05.528012 systemd-networkd[863]: eth1: Gained IPv6LL
Jan 23 01:00:05.720086 systemd-networkd[863]: eth0: Gained IPv6LL
Jan 23 01:00:05.780090 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 01:00:06.117485 ignition[1046]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 23 01:00:06.117485 ignition[1046]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 01:00:06.119685 ignition[1046]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:00:06.123123 ignition[1046]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 01:00:06.123123 ignition[1046]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 01:00:06.123123 ignition[1046]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jan 23 01:00:06.123123 ignition[1046]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 01:00:06.128517 ignition[1046]: INFO : files: files passed
Jan 23 01:00:06.128517 ignition[1046]: INFO : Ignition finished successfully
Jan 23 01:00:06.128576 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 01:00:06.135097 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 01:00:06.141982 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 01:00:06.155630 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 01:00:06.156207 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 01:00:06.164361 initrd-setup-root-after-ignition[1076]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:00:06.165960 initrd-setup-root-after-ignition[1076]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:00:06.166356 initrd-setup-root-after-ignition[1080]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 01:00:06.168829 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:00:06.169930 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 01:00:06.171926 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 01:00:06.236236 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 01:00:06.236423 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 01:00:06.238111 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 01:00:06.239492 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 01:00:06.241040 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 01:00:06.242414 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 01:00:06.294395 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:00:06.298772 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 01:00:06.332775 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 01:00:06.334666 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:00:06.336370 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 01:00:06.337391 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 01:00:06.337614 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 01:00:06.340152 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 01:00:06.341998 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 01:00:06.343590 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 01:00:06.345180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 01:00:06.346855 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 01:00:06.348952 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 01:00:06.350784 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 01:00:06.352625 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 01:00:06.354382 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 01:00:06.356131 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 01:00:06.357931 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 01:00:06.359654 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 01:00:06.359881 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 01:00:06.362349 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:00:06.364175 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:00:06.365871 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 01:00:06.366010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:00:06.367661 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 01:00:06.367929 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 01:00:06.370876 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 01:00:06.371128 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 01:00:06.372869 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 01:00:06.373090 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 01:00:06.374561 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 23 01:00:06.374846 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 01:00:06.378916 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 01:00:06.382099 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 01:00:06.383863 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 01:00:06.384093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 01:00:06.387130 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 01:00:06.387321 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 01:00:06.398193 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 01:00:06.398367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 01:00:06.427465 ignition[1100]: INFO : Ignition 2.22.0
Jan 23 01:00:06.427465 ignition[1100]: INFO : Stage: umount
Jan 23 01:00:06.431634 ignition[1100]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 01:00:06.431634 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner"
Jan 23 01:00:06.431634 ignition[1100]: INFO : umount: umount passed
Jan 23 01:00:06.431634 ignition[1100]: INFO : Ignition finished successfully
Jan 23 01:00:06.432488 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 01:00:06.432702 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 01:00:06.433651 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 01:00:06.433731 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 01:00:06.434508 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 01:00:06.434575 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 01:00:06.435354 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 01:00:06.435416 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 01:00:06.439000 systemd[1]: Stopped target network.target - Network.
Jan 23 01:00:06.440446 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 01:00:06.440550 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 01:00:06.442165 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 01:00:06.442903 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 01:00:06.446839 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:00:06.447906 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 01:00:06.449424 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 01:00:06.451030 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 01:00:06.451106 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 01:00:06.452843 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 01:00:06.452937 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 01:00:06.455919 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 01:00:06.456039 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 01:00:06.458308 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 01:00:06.458385 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 01:00:06.459978 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 01:00:06.461438 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 01:00:06.465639 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 01:00:06.473170 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 01:00:06.473337 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 01:00:06.480832 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 01:00:06.481405 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 01:00:06.481461 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 01:00:06.485987 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 01:00:06.486204 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 01:00:06.486296 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 01:00:06.487520 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 01:00:06.487873 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 01:00:06.487967 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 01:00:06.489536 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 01:00:06.490579 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 01:00:06.490615 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:00:06.491530 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 01:00:06.491573 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 01:00:06.493860 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 01:00:06.494337 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 01:00:06.494381 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 01:00:06.496832 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 01:00:06.496872 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 01:00:06.498643 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 01:00:06.498681 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:00:06.500268 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 01:00:06.501432 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 01:00:06.517236 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 01:00:06.517414 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 01:00:06.518890 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 01:00:06.518980 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 01:00:06.520460 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 01:00:06.520517 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:00:06.521205 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 01:00:06.521236 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:00:06.522184 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 01:00:06.522226 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 01:00:06.524017 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 01:00:06.524056 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 01:00:06.525761 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 01:00:06.525802 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 01:00:06.528853 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 01:00:06.529296 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 01:00:06.529336 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:00:06.531714 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 01:00:06.531855 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 01:00:06.533913 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 23 01:00:06.533954 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 01:00:06.534699 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 01:00:06.534734 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:00:06.535689 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 01:00:06.535727 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 01:00:06.543371 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 01:00:06.543467 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 01:00:06.544164 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 01:00:06.545943 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 01:00:06.573214 systemd[1]: Switching root.
Jan 23 01:00:06.611642 systemd-journald[197]: Journal stopped
Jan 23 01:00:07.835369 systemd-journald[197]: Received SIGTERM from PID 1 (systemd).
Jan 23 01:00:07.835436 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 01:00:07.835452 kernel: SELinux: policy capability open_perms=1
Jan 23 01:00:07.835461 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 01:00:07.835469 kernel: SELinux: policy capability always_check_network=0
Jan 23 01:00:07.835478 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 01:00:07.835489 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 01:00:07.835498 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 01:00:07.835506 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 01:00:07.835515 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 01:00:07.835523 kernel: audit: type=1403 audit(1769130006.841:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 01:00:07.835536 systemd[1]: Successfully loaded SELinux policy in 93.992ms.
Jan 23 01:00:07.835559 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.494ms.
Jan 23 01:00:07.835568 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 01:00:07.835580 systemd[1]: Detected virtualization kvm.
Jan 23 01:00:07.835593 systemd[1]: Detected architecture x86-64.
Jan 23 01:00:07.835601 systemd[1]: Detected first boot.
Jan 23 01:00:07.835610 systemd[1]: Hostname set to <ci-4459-2-2-n-ad7bb25bb9>.
Jan 23 01:00:07.835619 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 01:00:07.835628 zram_generator::config[1143]: No configuration found.
Jan 23 01:00:07.835642 kernel: Guest personality initialized and is inactive
Jan 23 01:00:07.835650 kernel: VMCI host device registered (name=vmci, major=10, minor=258)
Jan 23 01:00:07.835661 kernel: Initialized host personality
Jan 23 01:00:07.835669 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 01:00:07.835678 systemd[1]: Populated /etc with preset unit settings.
Jan 23 01:00:07.835687 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 01:00:07.835696 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 01:00:07.835705 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 01:00:07.835713 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 01:00:07.835724 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 01:00:07.835735 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 01:00:07.835754 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 01:00:07.835764 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 01:00:07.835773 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 01:00:07.835782 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 01:00:07.835791 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 01:00:07.835800 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 01:00:07.835810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 01:00:07.835818 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 01:00:07.835837 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 01:00:07.835846 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 01:00:07.835855 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 01:00:07.835865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 01:00:07.835873 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 01:00:07.835887 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 01:00:07.835898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 01:00:07.835907 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 01:00:07.835916 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 01:00:07.835925 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 01:00:07.835933 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 01:00:07.835942 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 01:00:07.835951 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 01:00:07.835960 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 01:00:07.835969 systemd[1]: Reached target swap.target - Swaps.
Jan 23 01:00:07.835980 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 01:00:07.835989 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 01:00:07.835998 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 01:00:07.836006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 01:00:07.836015 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 01:00:07.836024 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 01:00:07.836033 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 01:00:07.836042 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 01:00:07.836051 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 01:00:07.836061 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 01:00:07.836070 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:00:07.836079 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 01:00:07.836088 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 01:00:07.836096 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 01:00:07.836105 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 01:00:07.836115 systemd[1]: Reached target machines.target - Containers.
Jan 23 01:00:07.836124 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 01:00:07.836134 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 01:00:07.836144 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 01:00:07.836152 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 01:00:07.836162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 01:00:07.836171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 01:00:07.836179 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 01:00:07.836188 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 01:00:07.836197 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 01:00:07.836206 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 01:00:07.836216 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 01:00:07.836225 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 01:00:07.836234 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 01:00:07.836243 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 01:00:07.836251 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 01:00:07.836260 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 01:00:07.836269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 01:00:07.836280 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 01:00:07.836290 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 01:00:07.836299 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 01:00:07.836307 kernel: loop: module loaded
Jan 23 01:00:07.836318 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 01:00:07.836327 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 01:00:07.836336 systemd[1]: Stopped verity-setup.service.
Jan 23 01:00:07.836345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 23 01:00:07.836355 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 01:00:07.836363 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 01:00:07.836372 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 01:00:07.836399 systemd-journald[1224]: Collecting audit messages is disabled.
Jan 23 01:00:07.836425 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 01:00:07.836435 kernel: fuse: init (API version 7.41)
Jan 23 01:00:07.836445 systemd-journald[1224]: Journal started
Jan 23 01:00:07.836461 systemd-journald[1224]: Runtime Journal (/run/log/journal/e3b69e7928c3420e8a6b43e0c2b0dce4) is 8M, max 76.1M, 68.1M free.
Jan 23 01:00:07.477487 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 01:00:07.504322 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 23 01:00:07.505310 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 01:00:07.836884 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 01:00:07.848795 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 01:00:07.841894 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 01:00:07.844046 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 01:00:07.844651 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 01:00:07.845280 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 01:00:07.845458 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 01:00:07.846103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 01:00:07.846255 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 01:00:07.846970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 01:00:07.847118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 01:00:07.848222 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 01:00:07.848397 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 01:00:07.849465 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 01:00:07.850182 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 01:00:07.850888 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 01:00:07.852194 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 01:00:07.852835 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 01:00:07.853496 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 01:00:07.867768 kernel: ACPI: bus type drm_connector registered
Jan 23 01:00:07.869890 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 01:00:07.870125 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 01:00:07.871143 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 01:00:07.874907 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 01:00:07.876816 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 01:00:07.877238 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 01:00:07.877292 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 01:00:07.880271 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 01:00:07.883856 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 01:00:07.884649 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 01:00:07.891848 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 01:00:07.894823 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 01:00:07.895210 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:00:07.897037 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 01:00:07.897402 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:00:07.900454 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 01:00:07.903237 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 01:00:07.909153 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 01:00:07.913093 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 01:00:07.913505 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 01:00:07.919996 systemd-journald[1224]: Time spent on flushing to /var/log/journal/e3b69e7928c3420e8a6b43e0c2b0dce4 is 15.838ms for 1244 entries. Jan 23 01:00:07.919996 systemd-journald[1224]: System Journal (/var/log/journal/e3b69e7928c3420e8a6b43e0c2b0dce4) is 8M, max 584.8M, 576.8M free. Jan 23 01:00:07.949734 systemd-journald[1224]: Received client request to flush runtime journal. Jan 23 01:00:07.932655 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 01:00:07.936274 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 01:00:07.942586 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 01:00:07.957981 kernel: loop0: detected capacity change from 0 to 8 Jan 23 01:00:07.969800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 01:00:07.978816 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 01:00:07.983214 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 23 01:00:07.983224 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 23 01:00:07.984815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 01:00:07.991048 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 01:00:07.998812 kernel: loop1: detected capacity change from 0 to 128560 Jan 23 01:00:07.997256 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 01:00:08.009941 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 01:00:08.041900 kernel: loop2: detected capacity change from 0 to 229808 Jan 23 01:00:08.066464 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 01:00:08.068071 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 01:00:08.093235 kernel: loop3: detected capacity change from 0 to 110984 Jan 23 01:00:08.099817 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 01:00:08.113911 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 23 01:00:08.114434 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. Jan 23 01:00:08.125545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 01:00:08.144765 kernel: loop4: detected capacity change from 0 to 8 Jan 23 01:00:08.148790 kernel: loop5: detected capacity change from 0 to 128560 Jan 23 01:00:08.170905 kernel: loop6: detected capacity change from 0 to 229808 Jan 23 01:00:08.190674 kernel: loop7: detected capacity change from 0 to 110984 Jan 23 01:00:08.205334 (sd-merge)[1297]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-hetzner'. Jan 23 01:00:08.206214 (sd-merge)[1297]: Merged extensions into '/usr'. Jan 23 01:00:08.209556 systemd[1]: Reload requested from client PID 1268 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 01:00:08.210560 systemd[1]: Reloading... Jan 23 01:00:08.326766 zram_generator::config[1325]: No configuration found. Jan 23 01:00:08.402788 ldconfig[1263]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 01:00:08.501036 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 01:00:08.501340 systemd[1]: Reloading finished in 287 ms. Jan 23 01:00:08.532366 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 01:00:08.533161 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 01:00:08.536618 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 01:00:08.543056 systemd[1]: Starting ensure-sysext.service... Jan 23 01:00:08.546880 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 01:00:08.548255 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 01:00:08.561235 systemd[1]: Reload requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)... Jan 23 01:00:08.561250 systemd[1]: Reloading... Jan 23 01:00:08.590478 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 01:00:08.590677 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 01:00:08.590996 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 01:00:08.591204 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 01:00:08.591943 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 01:00:08.592139 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 23 01:00:08.592197 systemd-tmpfiles[1369]: ACLs are not supported, ignoring. Jan 23 01:00:08.599967 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:00:08.599978 systemd-tmpfiles[1369]: Skipping /boot Jan 23 01:00:08.616172 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 01:00:08.616186 systemd-tmpfiles[1369]: Skipping /boot Jan 23 01:00:08.638336 systemd-udevd[1370]: Using default interface naming scheme 'v255'. Jan 23 01:00:08.649794 zram_generator::config[1398]: No configuration found. Jan 23 01:00:08.858380 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 01:00:08.858464 systemd[1]: Reloading finished in 296 ms. Jan 23 01:00:08.870378 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 23 01:00:08.871435 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 01:00:08.881766 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Jan 23 01:00:08.885773 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 01:00:08.909113 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:00:08.914795 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:00:08.919760 kernel: ACPI: button: Power Button [PWRF] Jan 23 01:00:08.920120 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 01:00:08.920557 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:00:08.922317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 01:00:08.926447 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 01:00:08.927941 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 01:00:08.928477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:00:08.928862 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:00:08.930567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 01:00:08.935598 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 01:00:08.941891 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 01:00:08.947979 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 01:00:08.948346 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:00:08.951982 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 23 01:00:08.953211 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:00:08.953338 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:00:08.953456 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:00:08.953510 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:00:08.956571 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 01:00:08.956934 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:00:08.959694 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 23 01:00:08.960322 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 01:00:08.963964 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 01:00:08.964515 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 01:00:08.964644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 01:00:08.964797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 01:00:08.969377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 01:00:08.970167 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 01:00:08.971436 systemd[1]: Finished ensure-sysext.service. Jan 23 01:00:08.972682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 01:00:08.973044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 01:00:08.973923 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 01:00:08.974168 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 01:00:08.979561 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 01:00:08.979666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 01:00:08.981909 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 23 01:00:08.988019 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 01:00:09.001188 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 01:00:09.001407 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 01:00:09.021674 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 01:00:09.025529 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 01:00:09.036736 augenrules[1524]: No rules Jan 23 01:00:09.039157 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:00:09.039421 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:00:09.045864 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 01:00:09.049683 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 01:00:09.070259 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 01:00:09.070957 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 23 01:00:09.114201 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jan 23 01:00:09.114521 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jan 23 01:00:09.114717 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jan 23 01:00:09.175671 systemd-networkd[1490]: lo: Link UP Jan 23 01:00:09.175979 systemd-networkd[1490]: lo: Gained carrier Jan 23 01:00:09.182561 systemd-networkd[1490]: Enumeration completed Jan 23 01:00:09.182741 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 01:00:09.184223 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:00:09.184815 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:00:09.185464 systemd-networkd[1490]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:00:09.185648 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 01:00:09.187545 systemd-networkd[1490]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 01:00:09.188867 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 01:00:09.192190 systemd-networkd[1490]: eth0: Link UP Jan 23 01:00:09.192310 systemd-networkd[1490]: eth0: Gained carrier Jan 23 01:00:09.192323 systemd-networkd[1490]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:00:09.197089 systemd-networkd[1490]: eth1: Link UP Jan 23 01:00:09.200067 systemd-networkd[1490]: eth1: Gained carrier Jan 23 01:00:09.200137 systemd-networkd[1490]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 01:00:09.218206 systemd-resolved[1496]: Positive Trust Anchors: Jan 23 01:00:09.218223 systemd-resolved[1496]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 01:00:09.218252 systemd-resolved[1496]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 01:00:09.219407 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 23 01:00:09.219898 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 01:00:09.223373 systemd-resolved[1496]: Using system hostname 'ci-4459-2-2-n-ad7bb25bb9'. Jan 23 01:00:09.223801 systemd-networkd[1490]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 23 01:00:09.225672 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 23 01:00:09.227486 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 01:00:09.227952 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Jan 23 01:00:09.233953 systemd[1]: Reached target network.target - Network. Jan 23 01:00:09.235386 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 01:00:09.236081 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 01:00:09.236563 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 01:00:09.237055 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 01:00:09.237526 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 01:00:09.238123 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 01:00:09.238572 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 01:00:09.239032 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 01:00:09.239417 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 01:00:09.239475 systemd[1]: Reached target paths.target - Path Units. Jan 23 01:00:09.239896 systemd[1]: Reached target timers.target - Timer Units. Jan 23 01:00:09.241157 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 01:00:09.243035 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 01:00:09.245137 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 01:00:09.245713 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 01:00:09.246191 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 01:00:09.249253 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 01:00:09.250298 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 01:00:09.251488 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 01:00:09.253660 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 01:00:09.255768 kernel: EDAC MC: Ver: 3.0.0 Jan 23 01:00:09.254539 systemd[1]: Reached target basic.target - Basic System. Jan 23 01:00:09.254969 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:00:09.254990 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 01:00:09.255965 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 01:00:09.256829 systemd-networkd[1490]: eth0: DHCPv4 address 89.167.6.201/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 23 01:00:09.258079 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 23 01:00:09.258947 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 01:00:09.262951 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 01:00:09.270923 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 01:00:09.277905 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 01:00:09.281728 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 23 01:00:09.283802 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 01:00:09.285905 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 01:00:09.288910 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 01:00:09.293812 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 01:00:09.302820 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 23 01:00:09.305370 jq[1565]: false Jan 23 01:00:09.306468 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 01:00:09.308966 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 01:00:09.312998 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 23 01:00:09.313005 oslogin_cache_refresh[1568]: Refreshing passwd entry cache Jan 23 01:00:09.319289 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 01:00:09.320353 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 01:00:09.321868 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 01:00:09.322534 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 01:00:09.325107 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 01:00:09.326881 extend-filesystems[1566]: Found /dev/sda6 Jan 23 01:00:09.332506 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 23 01:00:09.332506 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:00:09.332506 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 23 01:00:09.332162 oslogin_cache_refresh[1568]: Failure getting users, quitting Jan 23 01:00:09.332180 oslogin_cache_refresh[1568]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 01:00:09.332216 oslogin_cache_refresh[1568]: Refreshing group entry cache Jan 23 01:00:09.332948 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 23 01:00:09.332948 google_oslogin_nss_cache[1568]: oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:00:09.332899 oslogin_cache_refresh[1568]: Failure getting groups, quitting Jan 23 01:00:09.332909 oslogin_cache_refresh[1568]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 01:00:09.338309 extend-filesystems[1566]: Found /dev/sda9 Jan 23 01:00:09.341968 extend-filesystems[1566]: Checking size of /dev/sda9 Jan 23 01:00:09.344221 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 01:00:09.344850 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 01:00:09.345035 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 01:00:09.345337 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 01:00:09.346803 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. 
Jan 23 01:00:09.353772 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 01:00:09.358846 jq[1584]: true Jan 23 01:00:09.366073 extend-filesystems[1566]: Resized partition /dev/sda9 Jan 23 01:00:09.363991 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 01:00:09.392635 extend-filesystems[1600]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 01:00:09.393932 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 01:00:09.394151 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 01:00:09.399899 update_engine[1583]: I20260123 01:00:09.399372 1583 main.cc:92] Flatcar Update Engine starting Jan 23 01:00:09.400094 coreos-metadata[1562]: Jan 23 01:00:09.399 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 23 01:00:09.402124 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 23 01:00:09.403438 tar[1595]: linux-amd64/LICENSE Jan 23 01:00:09.403616 tar[1595]: linux-amd64/helm Jan 23 01:00:09.408768 coreos-metadata[1562]: Jan 23 01:00:09.404 INFO Fetch successful Jan 23 01:00:09.408768 coreos-metadata[1562]: Jan 23 01:00:09.404 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 23 01:00:09.408768 coreos-metadata[1562]: Jan 23 01:00:09.405 INFO Fetch successful Jan 23 01:00:09.408195 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 01:00:09.413207 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:00:09.414344 jq[1601]: true Jan 23 01:00:09.418511 (ntainerd)[1610]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 01:00:09.422115 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 19393531 blocks Jan 23 01:00:09.457695 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:00:09.457929 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:00:09.461059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:00:09.475088 dbus-daemon[1563]: [system] SELinux support is enabled Jan 23 01:00:09.475486 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 01:00:09.484150 kernel: [drm] pci: virtio-vga detected at 0000:00:01.0 Jan 23 01:00:09.487413 kernel: Console: switching to colour dummy device 80x25 Jan 23 01:00:09.486977 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 01:00:09.487082 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 01:00:09.488190 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 01:00:09.488281 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Jan 23 01:00:09.492431 kernel: virtio-pci 0000:00:01.0: vgaarb: deactivate vga console Jan 23 01:00:09.492634 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 23 01:00:09.492649 kernel: [drm] features: -context_init Jan 23 01:00:09.496007 kernel: [drm] number of scanouts: 1 Jan 23 01:00:09.496035 kernel: [drm] number of cap sets: 0 Jan 23 01:00:09.499770 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 23 01:00:09.502778 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 23 01:00:09.510288 kernel: Console: switching to colour frame buffer device 160x50 Jan 23 01:00:09.510298 systemd[1]: Started update-engine.service - Update Engine. Jan 23 01:00:09.512244 update_engine[1583]: I20260123 01:00:09.512195 1583 update_check_scheduler.cc:74] Next update check in 10m12s Jan 23 01:00:09.520608 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 23 01:00:09.521130 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 01:00:09.542140 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 01:00:09.581790 bash[1640]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:00:09.585553 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 01:00:09.589589 systemd[1]: Starting sshkeys.service... Jan 23 01:00:09.619558 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 01:00:09.620388 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 01:00:09.651984 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 01:00:09.655810 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 01:00:09.665768 containerd[1610]: time="2026-01-23T01:00:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 01:00:09.665768 containerd[1610]: time="2026-01-23T01:00:09.664410394Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 01:00:09.687083 containerd[1610]: time="2026-01-23T01:00:09.687053853Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.31µs" Jan 23 01:00:09.687853 sshd_keygen[1599]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 01:00:09.688137 containerd[1610]: time="2026-01-23T01:00:09.688111834Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 01:00:09.688203 containerd[1610]: time="2026-01-23T01:00:09.688191944Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 01:00:09.688354 containerd[1610]: time="2026-01-23T01:00:09.688343714Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 01:00:09.688394 containerd[1610]: time="2026-01-23T01:00:09.688386254Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 01:00:09.688436 containerd[1610]: time="2026-01-23T01:00:09.688428754Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:00:09.688509 containerd[1610]: time="2026-01-23T01:00:09.688498584Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 01:00:09.688544 containerd[1610]: time="2026-01-23T01:00:09.688536394Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:00:09.688765 containerd[1610]: time="2026-01-23T01:00:09.688735354Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 01:00:09.689219 containerd[1610]: time="2026-01-23T01:00:09.689204405Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:00:09.689272 containerd[1610]: time="2026-01-23T01:00:09.689263385Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 01:00:09.689303 containerd[1610]: time="2026-01-23T01:00:09.689296415Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 01:00:09.689557 containerd[1610]: time="2026-01-23T01:00:09.689543565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.691973997Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.692005837Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.692013387Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.692039857Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.692208507Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 01:00:09.692552 containerd[1610]: time="2026-01-23T01:00:09.692254447Z" level=info msg="metadata content store policy set" policy=shared Jan 23 01:00:09.694158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 01:00:09.694539 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:00:09.698462 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 01:00:09.714208 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 01:00:09.731150 containerd[1610]: time="2026-01-23T01:00:09.731118980Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731261880Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731281270Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731291830Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731357850Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731366960Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731378600Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731407580Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731416070Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731423940Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731430540Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731440600Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731572790Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731589130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 01:00:09.732119 containerd[1610]: time="2026-01-23T01:00:09.731599640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731615090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731623380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731644770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731654240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731662070Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731673520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731685700Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731693100Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731810890Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731826900Z" level=info msg="Start snapshots syncer" Jan 23 01:00:09.732326 containerd[1610]: time="2026-01-23T01:00:09.731858310Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 01:00:09.732674 containerd[1610]: time="2026-01-23T01:00:09.732104441Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 01:00:09.732674 containerd[1610]: time="2026-01-23T01:00:09.732562161Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 01:00:09.733625 containerd[1610]: time="2026-01-23T01:00:09.733610232Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 01:00:09.733802 containerd[1610]: time="2026-01-23T01:00:09.733780582Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 01:00:09.734550 containerd[1610]: time="2026-01-23T01:00:09.734532363Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 01:00:09.734605 containerd[1610]: time="2026-01-23T01:00:09.734596623Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 01:00:09.734640 containerd[1610]: time="2026-01-23T01:00:09.734633353Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 01:00:09.734678 containerd[1610]: time="2026-01-23T01:00:09.734671643Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 01:00:09.734709 containerd[1610]: time="2026-01-23T01:00:09.734702163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 01:00:09.734766 containerd[1610]: time="2026-01-23T01:00:09.734736263Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 01:00:09.734864 containerd[1610]: time="2026-01-23T01:00:09.734848373Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 01:00:09.734908 containerd[1610]: 
time="2026-01-23T01:00:09.734899053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 01:00:09.734939 containerd[1610]: time="2026-01-23T01:00:09.734931293Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 01:00:09.734998 containerd[1610]: time="2026-01-23T01:00:09.734991283Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:00:09.735031 containerd[1610]: time="2026-01-23T01:00:09.735023633Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 01:00:09.735058 containerd[1610]: time="2026-01-23T01:00:09.735051553Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:00:09.735094 containerd[1610]: time="2026-01-23T01:00:09.735087053Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 01:00:09.735122 containerd[1610]: time="2026-01-23T01:00:09.735116223Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 01:00:09.735203 containerd[1610]: time="2026-01-23T01:00:09.735195613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 01:00:09.735238 containerd[1610]: time="2026-01-23T01:00:09.735231333Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 01:00:09.735271 containerd[1610]: time="2026-01-23T01:00:09.735264943Z" level=info msg="runtime interface created" Jan 23 01:00:09.735297 containerd[1610]: time="2026-01-23T01:00:09.735291403Z" level=info msg="created NRI interface" Jan 23 01:00:09.735331 containerd[1610]: time="2026-01-23T01:00:09.735323573Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 01:00:09.735358 containerd[1610]: time="2026-01-23T01:00:09.735352393Z" level=info msg="Connect containerd service" Jan 23 01:00:09.735392 containerd[1610]: time="2026-01-23T01:00:09.735386213Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 01:00:09.738545 containerd[1610]: time="2026-01-23T01:00:09.738412256Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:00:09.747772 systemd-logind[1580]: New seat seat0. Jan 23 01:00:09.748122 coreos-metadata[1657]: Jan 23 01:00:09.747 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 23 01:00:09.752713 systemd-logind[1580]: Watching system buttons on /dev/input/event3 (Power Button) Jan 23 01:00:09.752734 systemd-logind[1580]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 01:00:09.752907 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 01:00:09.756994 coreos-metadata[1657]: Jan 23 01:00:09.756 INFO Fetch successful Jan 23 01:00:09.763155 unknown[1657]: wrote ssh authorized keys file for user: core Jan 23 01:00:09.817300 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Jan 23 01:00:09.838133 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 01:00:09.851125 kernel: EXT4-fs (sda9): resized filesystem to 19393531 Jan 23 01:00:09.852496 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 01:00:09.859534 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900439561Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900488551Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900504771Z" level=info msg="Start subscribing containerd event" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900524851Z" level=info msg="Start recovering state" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900608091Z" level=info msg="Start event monitor" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900617221Z" level=info msg="Start cni network conf syncer for default" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900623431Z" level=info msg="Start streaming server" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900630861Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900636611Z" level=info msg="runtime interface starting up..." Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900641681Z" level=info msg="starting plugins..." Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.900651931Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 01:00:09.903297 containerd[1610]: time="2026-01-23T01:00:09.902705683Z" level=info msg="containerd successfully booted in 0.243535s" Jan 23 01:00:09.859974 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 01:00:09.866992 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 01:00:09.868883 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 01:00:09.890507 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 01:00:09.892589 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 01:00:09.896983 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 01:00:09.897195 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 01:00:09.900909 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 01:00:09.906643 extend-filesystems[1600]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 23 01:00:09.906643 extend-filesystems[1600]: old_desc_blocks = 1, new_desc_blocks = 10 Jan 23 01:00:09.906643 extend-filesystems[1600]: The filesystem on /dev/sda9 is now 19393531 (4k) blocks long. Jan 23 01:00:09.911722 extend-filesystems[1566]: Resized filesystem in /dev/sda9 Jan 23 01:00:09.915663 update-ssh-keys[1682]: Updated "/home/core/.ssh/authorized_keys" Jan 23 01:00:09.908692 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 01:00:09.908998 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 01:00:09.917531 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 01:00:09.923819 systemd[1]: Finished sshkeys.service. 
Jan 23 01:00:10.033176 tar[1595]: linux-amd64/README.md Jan 23 01:00:10.054303 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 01:00:10.903998 systemd-networkd[1490]: eth0: Gained IPv6LL Jan 23 01:00:10.905203 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 23 01:00:10.908998 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 01:00:10.911199 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 01:00:10.918659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:10.924196 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 01:00:10.974199 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 01:00:11.224063 systemd-networkd[1490]: eth1: Gained IPv6LL Jan 23 01:00:11.226078 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 23 01:00:12.564162 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:12.566444 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 01:00:12.572674 systemd[1]: Startup finished in 2.947s (kernel) + 6.147s (initrd) + 5.824s (userspace) = 14.919s. Jan 23 01:00:12.577496 (kubelet)[1728]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:00:13.267488 kubelet[1728]: E0123 01:00:13.267414 1728 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:00:13.272113 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:00:13.272281 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:00:13.272607 systemd[1]: kubelet.service: Consumed 1.578s CPU time, 270.2M memory peak. Jan 23 01:00:15.537113 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 01:00:15.539578 systemd[1]: Started sshd@0-89.167.6.201:22-4.153.228.146:53414.service - OpenSSH per-connection server daemon (4.153.228.146:53414). Jan 23 01:00:16.347191 sshd[1741]: Accepted publickey for core from 4.153.228.146 port 53414 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:16.349510 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:16.358956 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 01:00:16.361234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 01:00:16.373818 systemd-logind[1580]: New session 1 of user core. Jan 23 01:00:16.381910 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 01:00:16.385511 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 01:00:16.413224 (systemd)[1746]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 01:00:16.418604 systemd-logind[1580]: New session c1 of user core. Jan 23 01:00:16.587775 systemd[1746]: Queued start job for default target default.target. Jan 23 01:00:16.599956 systemd[1746]: Created slice app.slice - User Application Slice. 
Jan 23 01:00:16.599982 systemd[1746]: Reached target paths.target - Paths. Jan 23 01:00:16.600019 systemd[1746]: Reached target timers.target - Timers. Jan 23 01:00:16.601556 systemd[1746]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 01:00:16.619046 systemd[1746]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 01:00:16.619144 systemd[1746]: Reached target sockets.target - Sockets. Jan 23 01:00:16.619177 systemd[1746]: Reached target basic.target - Basic System. Jan 23 01:00:16.619212 systemd[1746]: Reached target default.target - Main User Target. Jan 23 01:00:16.619242 systemd[1746]: Startup finished in 188ms. Jan 23 01:00:16.619848 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 01:00:16.627915 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 01:00:17.184625 systemd[1]: Started sshd@1-89.167.6.201:22-4.153.228.146:53426.service - OpenSSH per-connection server daemon (4.153.228.146:53426). Jan 23 01:00:17.982881 sshd[1757]: Accepted publickey for core from 4.153.228.146 port 53426 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:17.985516 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:17.994825 systemd-logind[1580]: New session 2 of user core. Jan 23 01:00:18.004033 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 01:00:18.514837 sshd[1760]: Connection closed by 4.153.228.146 port 53426 Jan 23 01:00:18.517055 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:18.520622 systemd-logind[1580]: Session 2 logged out. Waiting for processes to exit. Jan 23 01:00:18.521473 systemd[1]: sshd@1-89.167.6.201:22-4.153.228.146:53426.service: Deactivated successfully. Jan 23 01:00:18.523169 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 01:00:18.524810 systemd-logind[1580]: Removed session 2. Jan 23 01:00:18.653421 systemd[1]: Started sshd@2-89.167.6.201:22-4.153.228.146:53442.service - OpenSSH per-connection server daemon (4.153.228.146:53442). Jan 23 01:00:19.439175 sshd[1766]: Accepted publickey for core from 4.153.228.146 port 53442 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:19.441785 sshd-session[1766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:19.451388 systemd-logind[1580]: New session 3 of user core. Jan 23 01:00:19.455991 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 01:00:19.962408 sshd[1769]: Connection closed by 4.153.228.146 port 53442 Jan 23 01:00:19.964093 sshd-session[1766]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:19.972356 systemd[1]: sshd@2-89.167.6.201:22-4.153.228.146:53442.service: Deactivated successfully. Jan 23 01:00:19.976391 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 01:00:19.978382 systemd-logind[1580]: Session 3 logged out. Waiting for processes to exit. Jan 23 01:00:19.981132 systemd-logind[1580]: Removed session 3. Jan 23 01:00:20.105221 systemd[1]: Started sshd@3-89.167.6.201:22-4.153.228.146:53444.service - OpenSSH per-connection server daemon (4.153.228.146:53444). 
Jan 23 01:00:20.889819 sshd[1775]: Accepted publickey for core from 4.153.228.146 port 53444 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:20.891008 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:20.897252 systemd-logind[1580]: New session 4 of user core. Jan 23 01:00:20.906100 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 01:00:21.415844 sshd[1778]: Connection closed by 4.153.228.146 port 53444 Jan 23 01:00:21.418068 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:21.426977 systemd[1]: sshd@3-89.167.6.201:22-4.153.228.146:53444.service: Deactivated successfully. Jan 23 01:00:21.431842 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 01:00:21.434805 systemd-logind[1580]: Session 4 logged out. Waiting for processes to exit. Jan 23 01:00:21.437017 systemd-logind[1580]: Removed session 4. Jan 23 01:00:21.551161 systemd[1]: Started sshd@4-89.167.6.201:22-4.153.228.146:53446.service - OpenSSH per-connection server daemon (4.153.228.146:53446). Jan 23 01:00:22.339939 sshd[1784]: Accepted publickey for core from 4.153.228.146 port 53446 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:22.342717 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:22.350863 systemd-logind[1580]: New session 5 of user core. Jan 23 01:00:22.357056 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 01:00:22.775614 sudo[1788]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 01:00:22.776560 sudo[1788]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:00:22.797251 sudo[1788]: pam_unix(sudo:session): session closed for user root Jan 23 01:00:22.921121 sshd[1787]: Connection closed by 4.153.228.146 port 53446 Jan 23 01:00:22.922717 sshd-session[1784]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:22.927160 systemd-logind[1580]: Session 5 logged out. Waiting for processes to exit. Jan 23 01:00:22.927294 systemd[1]: sshd@4-89.167.6.201:22-4.153.228.146:53446.service: Deactivated successfully. Jan 23 01:00:22.929502 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 01:00:22.931764 systemd-logind[1580]: Removed session 5. Jan 23 01:00:23.070240 systemd[1]: Started sshd@5-89.167.6.201:22-4.153.228.146:53452.service - OpenSSH per-connection server daemon (4.153.228.146:53452). Jan 23 01:00:23.522891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 01:00:23.525572 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:23.719117 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 01:00:23.726015 (kubelet)[1805]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:00:23.764919 kubelet[1805]: E0123 01:00:23.764853 1805 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:00:23.769580 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:00:23.769865 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:00:23.770529 systemd[1]: kubelet.service: Consumed 204ms CPU time, 108.3M memory peak. Jan 23 01:00:23.865125 sshd[1794]: Accepted publickey for core from 4.153.228.146 port 53452 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:23.867961 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:23.875842 systemd-logind[1580]: New session 6 of user core. Jan 23 01:00:23.894084 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 01:00:24.279349 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 01:00:24.280477 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:00:24.287147 sudo[1813]: pam_unix(sudo:session): session closed for user root Jan 23 01:00:24.293085 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 01:00:24.293339 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:00:24.306163 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 01:00:24.354396 augenrules[1835]: No rules Jan 23 01:00:24.356079 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 01:00:24.356425 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 01:00:24.357857 sudo[1812]: pam_unix(sudo:session): session closed for user root Jan 23 01:00:24.480523 sshd[1811]: Connection closed by 4.153.228.146 port 53452 Jan 23 01:00:24.482083 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Jan 23 01:00:24.489396 systemd[1]: sshd@5-89.167.6.201:22-4.153.228.146:53452.service: Deactivated successfully. Jan 23 01:00:24.493114 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 01:00:24.495924 systemd-logind[1580]: Session 6 logged out. Waiting for processes to exit. Jan 23 01:00:24.498230 systemd-logind[1580]: Removed session 6. Jan 23 01:00:24.616895 systemd[1]: Started sshd@6-89.167.6.201:22-4.153.228.146:39538.service - OpenSSH per-connection server daemon (4.153.228.146:39538). Jan 23 01:00:25.412211 sshd[1844]: Accepted publickey for core from 4.153.228.146 port 39538 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:00:25.413807 sshd-session[1844]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:00:25.419733 systemd-logind[1580]: New session 7 of user core. Jan 23 01:00:25.425070 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 23 01:00:25.825373 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 01:00:25.826081 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 01:00:26.214936 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 01:00:26.239680 (dockerd)[1865]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 01:00:26.507438 dockerd[1865]: time="2026-01-23T01:00:26.507152415Z" level=info msg="Starting up" Jan 23 01:00:26.510595 dockerd[1865]: time="2026-01-23T01:00:26.510552738Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 01:00:26.527364 dockerd[1865]: time="2026-01-23T01:00:26.527324162Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 01:00:26.570167 dockerd[1865]: time="2026-01-23T01:00:26.570131208Z" level=info msg="Loading containers: start." Jan 23 01:00:26.578779 kernel: Initializing XFRM netlink socket Jan 23 01:00:26.824084 systemd-timesyncd[1509]: Network configuration changed, trying to establish connection. Jan 23 01:00:26.852034 systemd-timesyncd[1509]: Contacted time server 78.46.204.247:123 (2.flatcar.pool.ntp.org). Jan 23 01:00:26.852087 systemd-timesyncd[1509]: Initial clock synchronization to Fri 2026-01-23 01:00:26.851281 UTC. Jan 23 01:00:26.861568 systemd-networkd[1490]: docker0: Link UP Jan 23 01:00:26.866053 dockerd[1865]: time="2026-01-23T01:00:26.866010254Z" level=info msg="Loading containers: done." Jan 23 01:00:26.880385 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3670948552-merged.mount: Deactivated successfully. Jan 23 01:00:26.885326 dockerd[1865]: time="2026-01-23T01:00:26.885281870Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 01:00:26.885444 dockerd[1865]: time="2026-01-23T01:00:26.885359650Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 01:00:26.885526 dockerd[1865]: time="2026-01-23T01:00:26.885472571Z" level=info msg="Initializing buildkit" Jan 23 01:00:26.912696 dockerd[1865]: time="2026-01-23T01:00:26.912544423Z" level=info msg="Completed buildkit initialization" Jan 23 01:00:26.918531 dockerd[1865]: time="2026-01-23T01:00:26.918510998Z" level=info msg="Daemon has completed initialization" Jan 23 01:00:26.918817 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 01:00:26.918926 dockerd[1865]: time="2026-01-23T01:00:26.918849428Z" level=info msg="API listen on /run/docker.sock" Jan 23 01:00:28.192416 containerd[1610]: time="2026-01-23T01:00:28.192363702Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 01:00:28.769031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2239123009.mount: Deactivated successfully. 
Jan 23 01:00:29.935207 containerd[1610]: time="2026-01-23T01:00:29.935132760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:29.936406 containerd[1610]: time="2026-01-23T01:00:29.936224038Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114812" Jan 23 01:00:29.939383 containerd[1610]: time="2026-01-23T01:00:29.939356576Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:29.942164 containerd[1610]: time="2026-01-23T01:00:29.942127425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:29.942673 containerd[1610]: time="2026-01-23T01:00:29.942646719Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 1.750238098s" Jan 23 01:00:29.942712 containerd[1610]: time="2026-01-23T01:00:29.942675428Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 23 01:00:29.943146 containerd[1610]: time="2026-01-23T01:00:29.943124526Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 01:00:31.177488 containerd[1610]: time="2026-01-23T01:00:31.177429901Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:31.178537 containerd[1610]: time="2026-01-23T01:00:31.178427316Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016803" Jan 23 01:00:31.179320 containerd[1610]: time="2026-01-23T01:00:31.179303103Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:31.181191 containerd[1610]: time="2026-01-23T01:00:31.181173925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:31.181718 containerd[1610]: time="2026-01-23T01:00:31.181703052Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.238557587s" Jan 23 01:00:31.181789 containerd[1610]: time="2026-01-23T01:00:31.181779100Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\""
Jan 23 01:00:31.182207 containerd[1610]: time="2026-01-23T01:00:31.182178589Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 01:00:32.350053 containerd[1610]: time="2026-01-23T01:00:32.349993120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:32.350988 containerd[1610]: time="2026-01-23T01:00:32.350964347Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158124" Jan 23 01:00:32.352548 containerd[1610]: time="2026-01-23T01:00:32.351476764Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:32.353403 containerd[1610]: time="2026-01-23T01:00:32.353380039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:32.354150 containerd[1610]: time="2026-01-23T01:00:32.354123751Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.171923962s" Jan 23 01:00:32.354187 containerd[1610]: time="2026-01-23T01:00:32.354153610Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 23 01:00:32.354921 containerd[1610]: time="2026-01-23T01:00:32.354908922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 01:00:33.405723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2274921068.mount: Deactivated successfully.
Jan 23 01:00:33.769082 containerd[1610]: time="2026-01-23T01:00:33.768939509Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:33.770006 containerd[1610]: time="2026-01-23T01:00:33.769965816Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930124" Jan 23 01:00:33.770829 containerd[1610]: time="2026-01-23T01:00:33.770767869Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:33.772193 containerd[1610]: time="2026-01-23T01:00:33.772171777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:33.772822 containerd[1610]: time="2026-01-23T01:00:33.772439391Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 1.41745894s" Jan 23 01:00:33.772822 containerd[1610]: time="2026-01-23T01:00:33.772468600Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 23 01:00:33.772931 containerd[1610]: time="2026-01-23T01:00:33.772905250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 01:00:34.020665 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 01:00:34.024398 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:34.263882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1886601592.mount: Deactivated successfully. Jan 23 01:00:34.277958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:34.287152 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 01:00:34.342773 kubelet[2163]: E0123 01:00:34.342713 2163 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 01:00:34.347364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 01:00:34.347610 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 01:00:34.348077 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.1M memory peak. 
Jan 23 01:00:35.480444 containerd[1610]: time="2026-01-23T01:00:35.480354388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:35.481960 containerd[1610]: time="2026-01-23T01:00:35.481693791Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942332" Jan 23 01:00:35.482983 containerd[1610]: time="2026-01-23T01:00:35.482956117Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:35.485523 containerd[1610]: time="2026-01-23T01:00:35.485487557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:35.486327 containerd[1610]: time="2026-01-23T01:00:35.486130485Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.713199886s" Jan 23 01:00:35.486327 containerd[1610]: time="2026-01-23T01:00:35.486161804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 23 01:00:35.487088 containerd[1610]: time="2026-01-23T01:00:35.487019636Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 01:00:35.938241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2200278850.mount: Deactivated successfully. 
Jan 23 01:00:35.946073 containerd[1610]: time="2026-01-23T01:00:35.945997620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:35.947140 containerd[1610]: time="2026-01-23T01:00:35.947087959Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321160" Jan 23 01:00:35.948248 containerd[1610]: time="2026-01-23T01:00:35.948162758Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:35.950388 containerd[1610]: time="2026-01-23T01:00:35.950318075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 01:00:35.951626 containerd[1610]: time="2026-01-23T01:00:35.950924103Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.874908ms" Jan 23 01:00:35.951626 containerd[1610]: time="2026-01-23T01:00:35.950960273Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 23 01:00:35.951940 containerd[1610]: time="2026-01-23T01:00:35.951896614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 01:00:36.453267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount621501369.mount: Deactivated successfully. 
Jan 23 01:00:38.166710 containerd[1610]: time="2026-01-23T01:00:38.166658925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:38.168271 containerd[1610]: time="2026-01-23T01:00:38.168090311Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926291" Jan 23 01:00:38.169199 containerd[1610]: time="2026-01-23T01:00:38.169172934Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:38.171756 containerd[1610]: time="2026-01-23T01:00:38.171712953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:38.172489 containerd[1610]: time="2026-01-23T01:00:38.172469082Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.220546518s" Jan 23 01:00:38.172562 containerd[1610]: time="2026-01-23T01:00:38.172548280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 23 01:00:41.931470 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:41.931784 systemd[1]: kubelet.service: Consumed 250ms CPU time, 110.1M memory peak. Jan 23 01:00:41.935481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:41.980633 systemd[1]: Reload requested from client PID 2305 ('systemctl') (unit session-7.scope)... Jan 23 01:00:41.980866 systemd[1]: Reloading... Jan 23 01:00:42.103796 zram_generator::config[2349]: No configuration found. Jan 23 01:00:42.292950 systemd[1]: Reloading finished in 311 ms. Jan 23 01:00:42.344547 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 01:00:42.344636 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 01:00:42.344969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:42.345021 systemd[1]: kubelet.service: Consumed 122ms CPU time, 98.3M memory peak. Jan 23 01:00:42.346973 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:42.527542 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:42.535139 (kubelet)[2401]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:00:42.569400 kubelet[2401]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:00:42.569400 kubelet[2401]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:00:42.569400 kubelet[2401]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:00:42.569400 kubelet[2401]: I0123 01:00:42.569311 2401 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:00:42.993008 kubelet[2401]: I0123 01:00:42.991856 2401 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:00:42.993008 kubelet[2401]: I0123 01:00:42.991878 2401 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:00:42.993008 kubelet[2401]: I0123 01:00:42.992141 2401 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:00:43.014904 kubelet[2401]: E0123 01:00:43.014863 2401 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://89.167.6.201:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 01:00:43.016852 kubelet[2401]: I0123 01:00:43.016813 2401 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:00:43.025624 kubelet[2401]: I0123 01:00:43.025579 2401 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:00:43.029026 kubelet[2401]: I0123 01:00:43.029012 2401 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:00:43.029241 kubelet[2401]: I0123 01:00:43.029196 2401 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:00:43.029348 kubelet[2401]: I0123 01:00:43.029217 2401 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-ad7bb25bb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:00:43.029348 kubelet[2401]: I0123 01:00:43.029334 2401 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 01:00:43.029348 kubelet[2401]: I0123 01:00:43.029340 2401 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:00:43.029659 kubelet[2401]: I0123 01:00:43.029436 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:43.031314 kubelet[2401]: I0123 01:00:43.031297 2401 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:00:43.031314 kubelet[2401]: I0123 01:00:43.031316 2401 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:00:43.031549 kubelet[2401]: I0123 01:00:43.031341 2401 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:00:43.031549 kubelet[2401]: I0123 01:00:43.031354 2401 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:00:43.040665 kubelet[2401]: E0123 01:00:43.040212 2401 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://89.167.6.201:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 01:00:43.040665 kubelet[2401]: E0123 01:00:43.040333 2401 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://89.167.6.201:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-2-n-ad7bb25bb9&limit=500&resourceVersion=0\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 01:00:43.040665 kubelet[2401]: I0123 01:00:43.040448 2401 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:00:43.041491 kubelet[2401]: I0123 01:00:43.041464 2401 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:00:43.043988 kubelet[2401]: W0123 01:00:43.043044 2401 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 01:00:43.047265 kubelet[2401]: I0123 01:00:43.047243 2401 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:00:43.047432 kubelet[2401]: I0123 01:00:43.047412 2401 server.go:1289] "Started kubelet" Jan 23 01:00:43.051270 kubelet[2401]: I0123 01:00:43.050845 2401 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:00:43.053785 kubelet[2401]: E0123 01:00:43.052592 2401 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://89.167.6.201:6443/api/v1/namespaces/default/events\": dial tcp 89.167.6.201:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-2-n-ad7bb25bb9.188d3666a14de5b7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-2-n-ad7bb25bb9,UID:ci-4459-2-2-n-ad7bb25bb9,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-ad7bb25bb9,},FirstTimestamp:2026-01-23 01:00:43.047273911 +0000 UTC m=+0.508149209,LastTimestamp:2026-01-23 01:00:43.047273911 +0000 UTC m=+0.508149209,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-ad7bb25bb9,}" Jan 23 01:00:43.053785 kubelet[2401]: I0123 01:00:43.053624 2401 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:00:43.054425 kubelet[2401]: I0123 01:00:43.054295 2401 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:00:43.055335 kubelet[2401]: I0123 01:00:43.055324 2401 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:00:43.056639 kubelet[2401]: E0123 01:00:43.056582 2401 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://89.167.6.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:00:43.056639 kubelet[2401]: E0123 01:00:43.054895 2401 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" Jan 23 01:00:43.056878 kubelet[2401]: E0123 01:00:43.056649 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-ad7bb25bb9?timeout=10s\": dial tcp 89.167.6.201:6443: connect: connection refused" interval="200ms" Jan 23 01:00:43.057597 kubelet[2401]: I0123 01:00:43.054687 2401 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:00:43.058865 kubelet[2401]: I0123 01:00:43.058359 2401 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:00:43.058865 kubelet[2401]: I0123 01:00:43.058416 2401 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:00:43.063589 kubelet[2401]: I0123 01:00:43.063538 2401 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:00:43.068797 kubelet[2401]: I0123 01:00:43.054680 2401 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 23 01:00:43.068797 kubelet[2401]: I0123 01:00:43.068166 2401 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:00:43.068797 kubelet[2401]: I0123 01:00:43.068659 2401 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:00:43.068982 kubelet[2401]: I0123 01:00:43.068845 2401 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:00:43.075463 kubelet[2401]: E0123 01:00:43.075435 2401 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:00:43.082672 kubelet[2401]: I0123 01:00:43.082640 2401 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:00:43.082672 kubelet[2401]: I0123 01:00:43.082653 2401 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:00:43.082672 kubelet[2401]: I0123 01:00:43.082665 2401 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:43.085790 kubelet[2401]: I0123 01:00:43.085658 2401 policy_none.go:49] "None policy: Start" Jan 23 01:00:43.085790 kubelet[2401]: I0123 01:00:43.085678 2401 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:00:43.085790 kubelet[2401]: I0123 01:00:43.085697 2401 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:00:43.095540 kubelet[2401]: I0123 01:00:43.095311 2401 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:00:43.099640 kubelet[2401]: I0123 01:00:43.099618 2401 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 01:00:43.099773 kubelet[2401]: I0123 01:00:43.099728 2401 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:00:43.099820 kubelet[2401]: I0123 01:00:43.099813 2401 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:00:43.099847 kubelet[2401]: I0123 01:00:43.099842 2401 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:00:43.099964 kubelet[2401]: E0123 01:00:43.099917 2401 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:00:43.103103 kubelet[2401]: E0123 01:00:43.103087 2401 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://89.167.6.201:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 01:00:43.103686 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 01:00:43.114803 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 01:00:43.118901 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 01:00:43.131581 kubelet[2401]: E0123 01:00:43.130978 2401 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:00:43.131581 kubelet[2401]: I0123 01:00:43.131129 2401 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:00:43.131581 kubelet[2401]: I0123 01:00:43.131136 2401 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:00:43.131581 kubelet[2401]: I0123 01:00:43.131525 2401 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:00:43.132654 kubelet[2401]: E0123 01:00:43.132643 2401 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 01:00:43.132770 kubelet[2401]: E0123 01:00:43.132763 2401 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-2-n-ad7bb25bb9\" not found" Jan 23 01:00:43.219606 systemd[1]: Created slice kubepods-burstable-podd37104b0de10368bf07682c7dbde24fa.slice - libcontainer container kubepods-burstable-podd37104b0de10368bf07682c7dbde24fa.slice. Jan 23 01:00:43.233855 kubelet[2401]: I0123 01:00:43.233795 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.234327 kubelet[2401]: E0123 01:00:43.234291 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.201:6443/api/v1/nodes\": dial tcp 89.167.6.201:6443: connect: connection refused" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.236495 kubelet[2401]: E0123 01:00:43.236175 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.242174 systemd[1]: Created slice kubepods-burstable-podd62fa953daf4711415e8fd5ad0c6ddeb.slice - libcontainer container kubepods-burstable-podd62fa953daf4711415e8fd5ad0c6ddeb.slice. Jan 23 01:00:43.247093 kubelet[2401]: E0123 01:00:43.246982 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.252959 systemd[1]: Created slice kubepods-burstable-poda6f9af4f373aa4524a650815b1a0a6d0.slice - libcontainer container kubepods-burstable-poda6f9af4f373aa4524a650815b1a0a6d0.slice. 
Jan 23 01:00:43.256524 kubelet[2401]: E0123 01:00:43.256195 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.257526 kubelet[2401]: E0123 01:00:43.257486 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-ad7bb25bb9?timeout=10s\": dial tcp 89.167.6.201:6443: connect: connection refused" interval="400ms" Jan 23 01:00:43.270076 kubelet[2401]: I0123 01:00:43.270045 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270175 kubelet[2401]: I0123 01:00:43.270088 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270175 kubelet[2401]: I0123 01:00:43.270119 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270175 kubelet[2401]: I0123 01:00:43.270145 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270175 kubelet[2401]: I0123 01:00:43.270171 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270352 kubelet[2401]: I0123 01:00:43.270193 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270352 kubelet[2401]: I0123 01:00:43.270216 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f9af4f373aa4524a650815b1a0a6d0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"a6f9af4f373aa4524a650815b1a0a6d0\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9"
Jan 23 01:00:43.270352 kubelet[2401]: I0123 01:00:43.270238 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.270352 kubelet[2401]: I0123 01:00:43.270266 2401 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.437339 kubelet[2401]: I0123 01:00:43.437239 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.437902 kubelet[2401]: E0123 01:00:43.437836 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.201:6443/api/v1/nodes\": dial tcp 89.167.6.201:6443: connect: connection refused" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.538276 containerd[1610]: time="2026-01-23T01:00:43.538072863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-ad7bb25bb9,Uid:d37104b0de10368bf07682c7dbde24fa,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:43.549510 containerd[1610]: time="2026-01-23T01:00:43.549469343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9,Uid:d62fa953daf4711415e8fd5ad0c6ddeb,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:43.573418 containerd[1610]: time="2026-01-23T01:00:43.573247022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-ad7bb25bb9,Uid:a6f9af4f373aa4524a650815b1a0a6d0,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:43.580015 containerd[1610]: time="2026-01-23T01:00:43.579960395Z" level=info msg="connecting to shim 8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e" address="unix:///run/containerd/s/bef229f572478031b5345bf26b67d727027689fbf5c933cbd8424cf5b37f066f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:43.595501 containerd[1610]: time="2026-01-23T01:00:43.595423659Z" level=info msg="connecting to shim 4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8" address="unix:///run/containerd/s/11aecad3be69b1835c7e854c187d6f2b6ae065c12737c5a502ce2c350caed745" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:43.622471 containerd[1610]: time="2026-01-23T01:00:43.622414043Z" level=info msg="connecting to shim 859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315" address="unix:///run/containerd/s/768f2145c617b0123eaf6b9925916cfec6200cc38a44001b18ae6914dd44fbb2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:43.632932 systemd[1]: Started cri-containerd-4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8.scope - libcontainer container 4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8. Jan 23 01:00:43.637058 systemd[1]: Started cri-containerd-8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e.scope - libcontainer container 8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e.
Jan 23 01:00:43.658701 kubelet[2401]: E0123 01:00:43.658516 2401 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://89.167.6.201:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-2-n-ad7bb25bb9?timeout=10s\": dial tcp 89.167.6.201:6443: connect: connection refused" interval="800ms" Jan 23 01:00:43.659967 systemd[1]: Started cri-containerd-859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315.scope - libcontainer container 859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315. Jan 23 01:00:43.719064 containerd[1610]: time="2026-01-23T01:00:43.719035962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-2-n-ad7bb25bb9,Uid:d37104b0de10368bf07682c7dbde24fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e\"" Jan 23 01:00:43.721982 containerd[1610]: time="2026-01-23T01:00:43.721958519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9,Uid:d62fa953daf4711415e8fd5ad0c6ddeb,Namespace:kube-system,Attempt:0,} returns sandbox id \"4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8\"" Jan 23 01:00:43.725089 containerd[1610]: time="2026-01-23T01:00:43.725071003Z" level=info msg="CreateContainer within sandbox \"8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 01:00:43.731441 containerd[1610]: time="2026-01-23T01:00:43.731423181Z" level=info msg="Container 3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:43.745486 containerd[1610]: time="2026-01-23T01:00:43.745451681Z" level=info msg="CreateContainer within sandbox \"4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 01:00:43.756154 containerd[1610]: time="2026-01-23T01:00:43.755995901Z" level=info msg="CreateContainer within sandbox \"8c14a50f2fdeb2b76a79b28ca4361a960a32b3cbf1ab77100f37e7025ed0f27e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd\"" Jan 23 01:00:43.757270 containerd[1610]: time="2026-01-23T01:00:43.757245417Z" level=info msg="StartContainer for \"3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd\"" Jan 23 01:00:43.757872 containerd[1610]: time="2026-01-23T01:00:43.757850351Z" level=info msg="Container d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:43.759076 containerd[1610]: time="2026-01-23T01:00:43.759045087Z" level=info msg="connecting to shim 3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd" address="unix:///run/containerd/s/bef229f572478031b5345bf26b67d727027689fbf5c933cbd8424cf5b37f066f" protocol=ttrpc version=3 Jan 23 01:00:43.760529 containerd[1610]: time="2026-01-23T01:00:43.760506310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-2-n-ad7bb25bb9,Uid:a6f9af4f373aa4524a650815b1a0a6d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315\"" Jan 23 01:00:43.764432 containerd[1610]: time="2026-01-23T01:00:43.764408225Z" level=info msg="CreateContainer within sandbox \"859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 01:00:43.766637 containerd[1610]: time="2026-01-23T01:00:43.766600731Z" level=info msg="CreateContainer within sandbox \"4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729\"" Jan 23 01:00:43.767241 containerd[1610]: time="2026-01-23T01:00:43.766982106Z" level=info msg="StartContainer for \"d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729\"" Jan 23 01:00:43.768360 containerd[1610]: time="2026-01-23T01:00:43.768344320Z" level=info msg="connecting to shim d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729" address="unix:///run/containerd/s/11aecad3be69b1835c7e854c187d6f2b6ae065c12737c5a502ce2c350caed745" protocol=ttrpc version=3 Jan 23 01:00:43.773808 containerd[1610]: time="2026-01-23T01:00:43.773763819Z" level=info msg="Container f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:43.777871 systemd[1]: Started cri-containerd-3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd.scope - libcontainer container 3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd. Jan 23 01:00:43.781759 containerd[1610]: time="2026-01-23T01:00:43.781410892Z" level=info msg="CreateContainer within sandbox \"859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003\"" Jan 23 01:00:43.782453 containerd[1610]: time="2026-01-23T01:00:43.781910477Z" level=info msg="StartContainer for \"f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003\"" Jan 23 01:00:43.784068 containerd[1610]: time="2026-01-23T01:00:43.784028702Z" level=info msg="connecting to shim f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003" address="unix:///run/containerd/s/768f2145c617b0123eaf6b9925916cfec6200cc38a44001b18ae6914dd44fbb2" protocol=ttrpc version=3 Jan 23 01:00:43.786134 systemd[1]: Started cri-containerd-d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729.scope - libcontainer container d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729. Jan 23 01:00:43.809849 systemd[1]: Started cri-containerd-f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003.scope - libcontainer container f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003.
Jan 23 01:00:43.840574 kubelet[2401]: I0123 01:00:43.840533 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.841471 kubelet[2401]: E0123 01:00:43.841199 2401 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://89.167.6.201:6443/api/v1/nodes\": dial tcp 89.167.6.201:6443: connect: connection refused" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:43.847339 containerd[1610]: time="2026-01-23T01:00:43.847310471Z" level=info msg="StartContainer for \"3172e1e512241a340a77d8e84684918a89e742da01caf92614a67f63da0c44fd\" returns successfully" Jan 23 01:00:43.866448 containerd[1610]: time="2026-01-23T01:00:43.866388204Z" level=info msg="StartContainer for \"d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729\" returns successfully" Jan 23 01:00:43.873114 kubelet[2401]: E0123 01:00:43.873032 2401 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://89.167.6.201:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 89.167.6.201:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 01:00:43.886332 containerd[1610]: time="2026-01-23T01:00:43.886288778Z" level=info msg="StartContainer for \"f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003\" returns successfully" Jan 23 01:00:44.115049 kubelet[2401]: E0123 01:00:44.114963 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.115928 kubelet[2401]: E0123 01:00:44.115910 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.117763 kubelet[2401]: E0123 01:00:44.117583 2401 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.644299 kubelet[2401]: I0123 01:00:44.644269 2401 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.759866 kubelet[2401]: E0123 01:00:44.759825 2401 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-2-n-ad7bb25bb9\" not found" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.906423 kubelet[2401]: I0123 01:00:44.906158 2401 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.956960 kubelet[2401]: I0123 01:00:44.956924 2401 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.965274 kubelet[2401]: E0123 01:00:44.965236 2401 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.965356 kubelet[2401]: I0123 01:00:44.965320 2401 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.966736 kubelet[2401]: E0123 01:00:44.966698 2401 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9"
Jan 23 01:00:44.966736 kubelet[2401]: I0123 01:00:44.966734 2401 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:44.969831 kubelet[2401]: E0123 01:00:44.969796 2401 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-ad7bb25bb9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:45.036187 kubelet[2401]: I0123 01:00:45.036155 2401 apiserver.go:52] "Watching apiserver" Jan 23 01:00:45.058761 kubelet[2401]: I0123 01:00:45.058445 2401 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:00:45.118819 kubelet[2401]: I0123 01:00:45.118774 2401 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:45.119359 kubelet[2401]: I0123 01:00:45.119331 2401 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:45.121334 kubelet[2401]: E0123 01:00:45.121287 2401 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:45.121904 kubelet[2401]: E0123 01:00:45.121869 2401 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-ad7bb25bb9\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.012110 systemd[1]: Reload requested from client PID 2683 ('systemctl') (unit session-7.scope)... Jan 23 01:00:47.012126 systemd[1]: Reloading... Jan 23 01:00:47.106796 zram_generator::config[2727]: No configuration found. Jan 23 01:00:47.332435 systemd[1]: Reloading finished in 319 ms. Jan 23 01:00:47.367624 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:47.381978 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 01:00:47.382316 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:47.382385 systemd[1]: kubelet.service: Consumed 894ms CPU time, 129M memory peak. Jan 23 01:00:47.384376 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 01:00:47.564588 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 01:00:47.573310 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 01:00:47.628492 kubelet[2778]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 01:00:47.628838 kubelet[2778]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 01:00:47.628869 kubelet[2778]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 01:00:47.628973 kubelet[2778]: I0123 01:00:47.628953 2778 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 01:00:47.635060 kubelet[2778]: I0123 01:00:47.635042 2778 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 01:00:47.635140 kubelet[2778]: I0123 01:00:47.635134 2778 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 01:00:47.635420 kubelet[2778]: I0123 01:00:47.635412 2778 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 01:00:47.638425 kubelet[2778]: I0123 01:00:47.638410 2778 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 01:00:47.641317 kubelet[2778]: I0123 01:00:47.641305 2778 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 01:00:47.651856 kubelet[2778]: I0123 01:00:47.650617 2778 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 01:00:47.656155 kubelet[2778]: I0123 01:00:47.656139 2778 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 01:00:47.656518 kubelet[2778]: I0123 01:00:47.656502 2778 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 01:00:47.656718 kubelet[2778]: I0123 01:00:47.656578 2778 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-2-n-ad7bb25bb9","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 01:00:47.656851 kubelet[2778]: I0123 01:00:47.656842 2778 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 01:00:47.656939 kubelet[2778]: I0123 01:00:47.656932 2778 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 01:00:47.657006 kubelet[2778]: I0123 01:00:47.656992 2778 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 01:00:47.657212 kubelet[2778]: I0123 01:00:47.657205 2778 kubelet.go:480] "Attempting to sync node with API server" Jan 23 01:00:47.659101 kubelet[2778]: I0123 01:00:47.659089 2778 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 01:00:47.659186 kubelet[2778]: I0123 01:00:47.659179 2778 kubelet.go:386] "Adding apiserver pod source" Jan 23 01:00:47.659256 kubelet[2778]: I0123 01:00:47.659250 2778 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 01:00:47.663319 kubelet[2778]: I0123 01:00:47.663307 2778 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 01:00:47.664048 kubelet[2778]: I0123 01:00:47.664037 2778 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 01:00:47.680146 kubelet[2778]: I0123 01:00:47.680117 2778 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 01:00:47.680329 kubelet[2778]: I0123 01:00:47.680307 2778 server.go:1289] "Started kubelet" Jan 23 01:00:47.687803 kubelet[2778]: I0123 01:00:47.686106 2778 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 01:00:47.688497 kubelet[2778]: I0123 01:00:47.688476 2778 server.go:317] "Adding debug handlers to kubelet server" Jan 23 01:00:47.691101 kubelet[2778]: I0123 01:00:47.691055 2778 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 01:00:47.693761 kubelet[2778]: I0123 01:00:47.691773 2778 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 01:00:47.700231 kubelet[2778]: I0123 01:00:47.699648 2778 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 01:00:47.700395 kubelet[2778]: I0123 01:00:47.700385 2778 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 01:00:47.701043 kubelet[2778]: I0123 01:00:47.701034 2778 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 01:00:47.702609 kubelet[2778]: I0123 01:00:47.702591 2778 factory.go:223] Registration of the systemd container factory successfully Jan 23 01:00:47.703853 kubelet[2778]: I0123 01:00:47.703834 2778 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 01:00:47.704152 kubelet[2778]: I0123 01:00:47.704133 2778 reconciler.go:26] "Reconciler: start to sync state" Jan 23 01:00:47.705329 kubelet[2778]: I0123 01:00:47.704547 2778 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 01:00:47.708072 kubelet[2778]: E0123 01:00:47.708060 2778 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 01:00:47.709021 kubelet[2778]: I0123 01:00:47.708994 2778 factory.go:223] Registration of the containerd container factory successfully Jan 23 01:00:47.729455 kubelet[2778]: I0123 01:00:47.729433 2778 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 01:00:47.730758 kubelet[2778]: I0123 01:00:47.730715 2778 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv6" Jan 23 01:00:47.730758 kubelet[2778]: I0123 01:00:47.730729 2778 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 01:00:47.730850 kubelet[2778]: I0123 01:00:47.730843 2778 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 01:00:47.730926 kubelet[2778]: I0123 01:00:47.730897 2778 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 01:00:47.731150 kubelet[2778]: E0123 01:00:47.731030 2778 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 01:00:47.760300 kubelet[2778]: I0123 01:00:47.760279 2778 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 01:00:47.760460 kubelet[2778]: I0123 01:00:47.760436 2778 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760499 2778 state_mem.go:36] "Initialized new in-memory state store" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760595 2778 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760602 2778 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760615 2778 policy_none.go:49] "None policy: Start" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760623 2778 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760631 2778 state_mem.go:35] "Initializing new in-memory state store" Jan 23 01:00:47.760771 kubelet[2778]: I0123 01:00:47.760707 2778 state_mem.go:75] "Updated machine memory state" Jan 23 01:00:47.764944 kubelet[2778]: E0123 01:00:47.764920 2778 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 01:00:47.765440 kubelet[2778]: I0123 01:00:47.765426 2778 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 01:00:47.765513 kubelet[2778]: I0123 01:00:47.765493 2778 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 01:00:47.765722 kubelet[2778]: I0123 01:00:47.765712 2778 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 01:00:47.767271 kubelet[2778]: E0123 01:00:47.767249 2778 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 01:00:47.832888 kubelet[2778]: I0123 01:00:47.832272 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.832888 kubelet[2778]: I0123 01:00:47.832576 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.833166 kubelet[2778]: I0123 01:00:47.833148 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.873664 kubelet[2778]: I0123 01:00:47.873586 2778 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.881337 kubelet[2778]: I0123 01:00:47.881193 2778 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:47.881337 kubelet[2778]: I0123 01:00:47.881278 2778 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.004849 kubelet[2778]: I0123 01:00:48.004797 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-k8s-certs\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.004849 kubelet[2778]: I0123 01:00:48.004849 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005055 kubelet[2778]: I0123 01:00:48.004879 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-ca-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005055 kubelet[2778]: I0123 01:00:48.004907 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005055 kubelet[2778]: I0123 01:00:48.004930 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a6f9af4f373aa4524a650815b1a0a6d0-kubeconfig\") pod \"kube-scheduler-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"a6f9af4f373aa4524a650815b1a0a6d0\") " pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005055 kubelet[2778]: I0123 01:00:48.004954 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") 
" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005055 kubelet[2778]: I0123 01:00:48.004979 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005426 kubelet[2778]: I0123 01:00:48.005002 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d62fa953daf4711415e8fd5ad0c6ddeb-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d62fa953daf4711415e8fd5ad0c6ddeb\") " pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.005426 kubelet[2778]: I0123 01:00:48.005024 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d37104b0de10368bf07682c7dbde24fa-ca-certs\") pod \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" (UID: \"d37104b0de10368bf07682c7dbde24fa\") " pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.663099 kubelet[2778]: I0123 01:00:48.663037 2778 apiserver.go:52] "Watching apiserver" Jan 23 01:00:48.705564 kubelet[2778]: I0123 01:00:48.704795 2778 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 01:00:48.753197 kubelet[2778]: I0123 01:00:48.753149 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.756027 kubelet[2778]: I0123 01:00:48.755993 2778 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.772217 kubelet[2778]: E0123 01:00:48.772163 2778 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-2-n-ad7bb25bb9\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.777309 kubelet[2778]: E0123 01:00:48.777056 2778 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-2-n-ad7bb25bb9\" already exists" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:00:48.810574 kubelet[2778]: I0123 01:00:48.810115 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-2-n-ad7bb25bb9" podStartSLOduration=1.8100922640000001 podStartE2EDuration="1.810092264s" podCreationTimestamp="2026-01-23 01:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:48.797049239 +0000 UTC m=+1.213682645" watchObservedRunningTime="2026-01-23 01:00:48.810092264 +0000 UTC m=+1.226725690" Jan 23 01:00:48.831280 kubelet[2778]: I0123 01:00:48.831155 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-2-n-ad7bb25bb9" podStartSLOduration=1.83052547 podStartE2EDuration="1.83052547s" podCreationTimestamp="2026-01-23 01:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:48.812934531 +0000 UTC m=+1.229567947" 
watchObservedRunningTime="2026-01-23 01:00:48.83052547 +0000 UTC m=+1.247158886" Jan 23 01:00:48.852739 kubelet[2778]: I0123 01:00:48.852435 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-2-n-ad7bb25bb9" podStartSLOduration=1.852404725 podStartE2EDuration="1.852404725s" podCreationTimestamp="2026-01-23 01:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:48.832318636 +0000 UTC m=+1.248952052" watchObservedRunningTime="2026-01-23 01:00:48.852404725 +0000 UTC m=+1.269038151" Jan 23 01:00:53.116496 kubelet[2778]: I0123 01:00:53.116410 2778 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 01:00:53.120054 kubelet[2778]: I0123 01:00:53.119236 2778 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 01:00:53.120112 containerd[1610]: time="2026-01-23T01:00:53.118873739Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 01:00:53.357448 systemd[1]: Created slice kubepods-besteffort-pod6ffbacbc_8b50_4e9d_9cd7_e8485a09f0c4.slice - libcontainer container kubepods-besteffort-pod6ffbacbc_8b50_4e9d_9cd7_e8485a09f0c4.slice. Jan 23 01:00:53.440568 kubelet[2778]: I0123 01:00:53.440172 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-kube-proxy\") pod \"kube-proxy-6vvxx\" (UID: \"6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4\") " pod="kube-system/kube-proxy-6vvxx" Jan 23 01:00:53.440568 kubelet[2778]: I0123 01:00:53.440391 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-xtables-lock\") pod \"kube-proxy-6vvxx\" (UID: \"6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4\") " pod="kube-system/kube-proxy-6vvxx" Jan 23 01:00:53.440568 kubelet[2778]: I0123 01:00:53.440418 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-lib-modules\") pod \"kube-proxy-6vvxx\" (UID: \"6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4\") " pod="kube-system/kube-proxy-6vvxx" Jan 23 01:00:53.440568 kubelet[2778]: I0123 01:00:53.440437 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh66l\" (UniqueName: \"kubernetes.io/projected/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-kube-api-access-hh66l\") pod \"kube-proxy-6vvxx\" (UID: \"6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4\") " pod="kube-system/kube-proxy-6vvxx" Jan 23 01:00:53.551198 kubelet[2778]: E0123 01:00:53.551115 2778 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 23 01:00:53.551198 kubelet[2778]: E0123 01:00:53.551152 2778 projected.go:194] Error preparing data for projected volume kube-api-access-hh66l for pod kube-system/kube-proxy-6vvxx: configmap "kube-root-ca.crt" not found Jan 23 01:00:53.551431 kubelet[2778]: E0123 01:00:53.551232 2778 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-kube-api-access-hh66l podName:6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4 
nodeName:}" failed. No retries permitted until 2026-01-23 01:00:54.051207998 +0000 UTC m=+6.467841414 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-hh66l" (UniqueName: "kubernetes.io/projected/6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4-kube-api-access-hh66l") pod "kube-proxy-6vvxx" (UID: "6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4") : configmap "kube-root-ca.crt" not found Jan 23 01:00:54.270267 containerd[1610]: time="2026-01-23T01:00:54.270206407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vvxx,Uid:6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4,Namespace:kube-system,Attempt:0,}" Jan 23 01:00:54.310355 containerd[1610]: time="2026-01-23T01:00:54.309536524Z" level=info msg="connecting to shim 20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51" address="unix:///run/containerd/s/4f2b60b514c068e9701b56cfdcc13d94dc2f7639ebcd8a75bde3ffde7f6449b2" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:54.353153 systemd[1]: Started cri-containerd-20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51.scope - libcontainer container 20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51. Jan 23 01:00:54.364364 systemd[1]: Created slice kubepods-besteffort-pod90526e82_e71e_4869_a4c1_c17c77d973a8.slice - libcontainer container kubepods-besteffort-pod90526e82_e71e_4869_a4c1_c17c77d973a8.slice. Jan 23 01:00:54.416425 containerd[1610]: time="2026-01-23T01:00:54.416336771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vvxx,Uid:6ffbacbc-8b50-4e9d-9cd7-e8485a09f0c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51\"" Jan 23 01:00:54.421581 containerd[1610]: time="2026-01-23T01:00:54.421565114Z" level=info msg="CreateContainer within sandbox \"20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 01:00:54.436896 containerd[1610]: time="2026-01-23T01:00:54.436442168Z" level=info msg="Container c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:54.440498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815428049.mount: Deactivated successfully. 
Jan 23 01:00:54.445203 kubelet[2778]: I0123 01:00:54.445183 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwgkj\" (UniqueName: \"kubernetes.io/projected/90526e82-e71e-4869-a4c1-c17c77d973a8-kube-api-access-wwgkj\") pod \"tigera-operator-7dcd859c48-djtjt\" (UID: \"90526e82-e71e-4869-a4c1-c17c77d973a8\") " pod="tigera-operator/tigera-operator-7dcd859c48-djtjt" Jan 23 01:00:54.445644 containerd[1610]: time="2026-01-23T01:00:54.445627319Z" level=info msg="CreateContainer within sandbox \"20bfbaeccf346bc1b295b7ab7aaf51baecb854aaea7ef767e82e28c2291b8e51\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37\"" Jan 23 01:00:54.445807 kubelet[2778]: I0123 01:00:54.445783 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/90526e82-e71e-4869-a4c1-c17c77d973a8-var-lib-calico\") pod \"tigera-operator-7dcd859c48-djtjt\" (UID: \"90526e82-e71e-4869-a4c1-c17c77d973a8\") " pod="tigera-operator/tigera-operator-7dcd859c48-djtjt" Jan 23 01:00:54.446576 containerd[1610]: time="2026-01-23T01:00:54.446227467Z" level=info msg="StartContainer for \"c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37\"" Jan 23 01:00:54.447219 containerd[1610]: time="2026-01-23T01:00:54.447186832Z" level=info msg="connecting to shim c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37" address="unix:///run/containerd/s/4f2b60b514c068e9701b56cfdcc13d94dc2f7639ebcd8a75bde3ffde7f6449b2" protocol=ttrpc version=3 Jan 23 01:00:54.460841 systemd[1]: Started cri-containerd-c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37.scope - libcontainer container c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37. Jan 23 01:00:54.526370 containerd[1610]: time="2026-01-23T01:00:54.526227873Z" level=info msg="StartContainer for \"c9681c66b3490b32e91646ec4fa74f25d8064d6421c00ad29f067033907bad37\" returns successfully" Jan 23 01:00:54.653802 update_engine[1583]: I20260123 01:00:54.653007 1583 update_attempter.cc:509] Updating boot flags... Jan 23 01:00:54.672725 containerd[1610]: time="2026-01-23T01:00:54.672652795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-djtjt,Uid:90526e82-e71e-4869-a4c1-c17c77d973a8,Namespace:tigera-operator,Attempt:0,}" Jan 23 01:00:54.719157 containerd[1610]: time="2026-01-23T01:00:54.718911335Z" level=info msg="connecting to shim e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9" address="unix:///run/containerd/s/ca3cca27a5277796ff7d8f640d8820578e943025624b37449a7a9d9486e17af6" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:00:54.753844 systemd[1]: Started cri-containerd-e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9.scope - libcontainer container e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9. 
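The MountVolume.SetUp failure for "kube-api-access-hh66l" at 01:00:53 is a similar transient: the kube-root-ca.crt ConfigMap had not yet been published into the namespace, and the mount succeeded on the 500ms retry (the kube-proxy sandbox above was created at 01:00:54). The kube-api-access volume is injected by the API server as a projected volume; under typical defaults it has roughly this shape (a sketch — the field values here are illustrative, not read from this node):

  volumes:
    - name: kube-api-access-hh66l
      projected:
        defaultMode: 420
        sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              name: kube-root-ca.crt
              items:
                - key: ca.crt
                  path: ca.crt
          - downwardAPI:
              items:
                - fieldRef:
                    fieldPath: metadata.namespace
                  path: namespace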
Jan 23 01:00:54.945560 containerd[1610]: time="2026-01-23T01:00:54.945160365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-djtjt,Uid:90526e82-e71e-4869-a4c1-c17c77d973a8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9\"" Jan 23 01:00:54.947796 containerd[1610]: time="2026-01-23T01:00:54.946961605Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 01:00:55.508945 kubelet[2778]: I0123 01:00:55.508692 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6vvxx" podStartSLOduration=2.508676859 podStartE2EDuration="2.508676859s" podCreationTimestamp="2026-01-23 01:00:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:00:54.804548592 +0000 UTC m=+7.221181988" watchObservedRunningTime="2026-01-23 01:00:55.508676859 +0000 UTC m=+7.925310245" Jan 23 01:00:57.208195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4242437802.mount: Deactivated successfully. Jan 23 01:00:57.890835 containerd[1610]: time="2026-01-23T01:00:57.890771835Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:57.892069 containerd[1610]: time="2026-01-23T01:00:57.891950370Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 01:00:57.893354 containerd[1610]: time="2026-01-23T01:00:57.893316065Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:57.895826 containerd[1610]: time="2026-01-23T01:00:57.895788685Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:00:57.896552 containerd[1610]: time="2026-01-23T01:00:57.896523921Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.94861899s" Jan 23 01:00:57.896646 containerd[1610]: time="2026-01-23T01:00:57.896631841Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 01:00:57.901288 containerd[1610]: time="2026-01-23T01:00:57.901257842Z" level=info msg="CreateContainer within sandbox \"e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 01:00:57.913253 containerd[1610]: time="2026-01-23T01:00:57.913227153Z" level=info msg="Container 154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:00:57.914346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4101625560.mount: Deactivated successfully. 
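The deprecation warnings printed when the kubelet restarted at 01:00:47 (--container-runtime-endpoint, --volume-plugin-dir, --pod-infra-container-image) all point at the file passed via --config. The cgroup driver, static pod path, and hard-eviction thresholds visible in the nodeConfig dump at 01:00:47.656578 map onto KubeletConfiguration fields along these lines (a sketch under those assumptions; the containerd socket path is assumed, as it is not shown in the log):

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  cgroupDriver: systemd
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  staticPodPath: /etc/kubernetes/manifests
  evictionHard:
    memory.available: "100Mi"
    nodefs.available: "10%"
    nodefs.inodesFree: "5%"
    imagefs.available: "15%"
    imagefs.inodesFree: "5%"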
Jan 23 01:00:57.924366 containerd[1610]: time="2026-01-23T01:00:57.924329347Z" level=info msg="CreateContainer within sandbox \"e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a\"" Jan 23 01:00:57.925022 containerd[1610]: time="2026-01-23T01:00:57.924983145Z" level=info msg="StartContainer for \"154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a\"" Jan 23 01:00:57.925612 containerd[1610]: time="2026-01-23T01:00:57.925586472Z" level=info msg="connecting to shim 154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a" address="unix:///run/containerd/s/ca3cca27a5277796ff7d8f640d8820578e943025624b37449a7a9d9486e17af6" protocol=ttrpc version=3 Jan 23 01:00:57.947898 systemd[1]: Started cri-containerd-154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a.scope - libcontainer container 154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a. Jan 23 01:00:57.977115 containerd[1610]: time="2026-01-23T01:00:57.977060670Z" level=info msg="StartContainer for \"154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a\" returns successfully" Jan 23 01:00:58.808964 kubelet[2778]: I0123 01:00:58.808847 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-djtjt" podStartSLOduration=1.858110404 podStartE2EDuration="4.808825955s" podCreationTimestamp="2026-01-23 01:00:54 +0000 UTC" firstStartedPulling="2026-01-23 01:00:54.946699107 +0000 UTC m=+7.363332493" lastFinishedPulling="2026-01-23 01:00:57.897414668 +0000 UTC m=+10.314048044" observedRunningTime="2026-01-23 01:00:58.807993089 +0000 UTC m=+11.224626505" watchObservedRunningTime="2026-01-23 01:00:58.808825955 +0000 UTC m=+11.225459371" Jan 23 01:01:03.195326 sudo[1848]: pam_unix(sudo:session): session closed for user root Jan 23 01:01:03.315916 sshd[1847]: Connection closed by 4.153.228.146 port 39538 Jan 23 01:01:03.316438 sshd-session[1844]: pam_unix(sshd:session): session closed for user core Jan 23 01:01:03.320607 systemd-logind[1580]: Session 7 logged out. Waiting for processes to exit. Jan 23 01:01:03.321507 systemd[1]: sshd@6-89.167.6.201:22-4.153.228.146:39538.service: Deactivated successfully. Jan 23 01:01:03.325087 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 01:01:03.325560 systemd[1]: session-7.scope: Consumed 5.609s CPU time, 231.8M memory peak. Jan 23 01:01:03.329711 systemd-logind[1580]: Removed session 7. 
Jan 23 01:01:08.838575 kubelet[2778]: I0123 01:01:08.838531 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/acf8431d-70ab-4277-ac29-8fcdfa2d76b2-typha-certs\") pod \"calico-typha-5db4dd6f8c-zv8ts\" (UID: \"acf8431d-70ab-4277-ac29-8fcdfa2d76b2\") " pod="calico-system/calico-typha-5db4dd6f8c-zv8ts" Jan 23 01:01:08.838575 kubelet[2778]: I0123 01:01:08.838577 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acf8431d-70ab-4277-ac29-8fcdfa2d76b2-tigera-ca-bundle\") pod \"calico-typha-5db4dd6f8c-zv8ts\" (UID: \"acf8431d-70ab-4277-ac29-8fcdfa2d76b2\") " pod="calico-system/calico-typha-5db4dd6f8c-zv8ts" Jan 23 01:01:08.840259 kubelet[2778]: I0123 01:01:08.838611 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxk2h\" (UniqueName: \"kubernetes.io/projected/acf8431d-70ab-4277-ac29-8fcdfa2d76b2-kube-api-access-cxk2h\") pod \"calico-typha-5db4dd6f8c-zv8ts\" (UID: \"acf8431d-70ab-4277-ac29-8fcdfa2d76b2\") " pod="calico-system/calico-typha-5db4dd6f8c-zv8ts" Jan 23 01:01:08.843386 systemd[1]: Created slice kubepods-besteffort-podacf8431d_70ab_4277_ac29_8fcdfa2d76b2.slice - libcontainer container kubepods-besteffort-podacf8431d_70ab_4277_ac29_8fcdfa2d76b2.slice. Jan 23 01:01:09.078090 systemd[1]: Created slice kubepods-besteffort-pod0cbaed9c_1e9e_46ac_b3d1_c67f640303a3.slice - libcontainer container kubepods-besteffort-pod0cbaed9c_1e9e_46ac_b3d1_c67f640303a3.slice. Jan 23 01:01:09.142027 kubelet[2778]: I0123 01:01:09.141812 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-cni-log-dir\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142027 kubelet[2778]: I0123 01:01:09.141913 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-lib-modules\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142027 kubelet[2778]: I0123 01:01:09.141982 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-cni-bin-dir\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142027 kubelet[2778]: I0123 01:01:09.142031 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-policysync\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142322 kubelet[2778]: I0123 01:01:09.142063 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-var-run-calico\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " 
pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142322 kubelet[2778]: I0123 01:01:09.142095 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-flexvol-driver-host\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142322 kubelet[2778]: I0123 01:01:09.142124 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zd99\" (UniqueName: \"kubernetes.io/projected/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-kube-api-access-6zd99\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142322 kubelet[2778]: I0123 01:01:09.142148 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-cni-net-dir\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142322 kubelet[2778]: I0123 01:01:09.142188 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-node-certs\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142533 kubelet[2778]: I0123 01:01:09.142209 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-tigera-ca-bundle\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142533 kubelet[2778]: I0123 01:01:09.142234 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-xtables-lock\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.142533 kubelet[2778]: I0123 01:01:09.142259 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0cbaed9c-1e9e-46ac-b3d1-c67f640303a3-var-lib-calico\") pod \"calico-node-ln8zh\" (UID: \"0cbaed9c-1e9e-46ac-b3d1-c67f640303a3\") " pod="calico-system/calico-node-ln8zh" Jan 23 01:01:09.159835 containerd[1610]: time="2026-01-23T01:01:09.159595839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db4dd6f8c-zv8ts,Uid:acf8431d-70ab-4277-ac29-8fcdfa2d76b2,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:09.197221 containerd[1610]: time="2026-01-23T01:01:09.196945545Z" level=info msg="connecting to shim 621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8" address="unix:///run/containerd/s/c09e7e2aa50abff4fe2a903873054117288d5473031ee146f794581542c0cc2f" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:09.226952 systemd[1]: Started cri-containerd-621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8.scope - libcontainer container 621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8. 
Jan 23 01:01:09.250766 kubelet[2778]: E0123 01:01:09.248673 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.250766 kubelet[2778]: W0123 01:01:09.248725 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.250924 kubelet[2778]: E0123 01:01:09.250911 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.274417 kubelet[2778]: E0123 01:01:09.272871 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:01:09.328126 containerd[1610]: time="2026-01-23T01:01:09.328072916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5db4dd6f8c-zv8ts,Uid:acf8431d-70ab-4277-ac29-8fcdfa2d76b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8\""
Jan 23 01:01:09.331529 containerd[1610]: time="2026-01-23T01:01:09.331486600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 01:01:09.347494 kubelet[2778]: I0123 01:01:09.347473 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5259dbbd-3b68-4b5b-9de4-297f13034dcd-registration-dir\") pod \"csi-node-driver-wc28p\" (UID: \"5259dbbd-3b68-4b5b-9de4-297f13034dcd\") " pod="calico-system/csi-node-driver-wc28p"
Jan 23 01:01:09.348795 kubelet[2778]: I0123 01:01:09.347846 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2968\" (UniqueName: \"kubernetes.io/projected/5259dbbd-3b68-4b5b-9de4-297f13034dcd-kube-api-access-h2968\") pod \"csi-node-driver-wc28p\" (UID: \"5259dbbd-3b68-4b5b-9de4-297f13034dcd\") " pod="calico-system/csi-node-driver-wc28p"
Jan 23 01:01:09.348795 kubelet[2778]: I0123 01:01:09.348414 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5259dbbd-3b68-4b5b-9de4-297f13034dcd-kubelet-dir\") pod \"csi-node-driver-wc28p\" (UID: \"5259dbbd-3b68-4b5b-9de4-297f13034dcd\") " pod="calico-system/csi-node-driver-wc28p"
Jan 23 01:01:09.349043 kubelet[2778]: I0123 01:01:09.349027 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5259dbbd-3b68-4b5b-9de4-297f13034dcd-socket-dir\") pod \"csi-node-driver-wc28p\" (UID: \"5259dbbd-3b68-4b5b-9de4-297f13034dcd\") " pod="calico-system/csi-node-driver-wc28p"
Jan 23 01:01:09.349803 kubelet[2778]: I0123 01:01:09.349692 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5259dbbd-3b68-4b5b-9de4-297f13034dcd-varrun\") pod \"csi-node-driver-wc28p\" (UID: \"5259dbbd-3b68-4b5b-9de4-297f13034dcd\") " pod="calico-system/csi-node-driver-wc28p"
Jan 23 01:01:09.354834 kubelet[2778]: E0123 01:01:09.354159 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.354834 kubelet[2778]: W0123 01:01:09.354788 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.354834 kubelet[2778]: E0123 01:01:09.354812 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Jan 23 01:01:09.355158 kubelet[2778]: E0123 01:01:09.355149 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.355206 kubelet[2778]: W0123 01:01:09.355198 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.355238 kubelet[2778]: E0123 01:01:09.355231 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.355503 kubelet[2778]: E0123 01:01:09.355473 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.355503 kubelet[2778]: W0123 01:01:09.355483 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.355503 kubelet[2778]: E0123 01:01:09.355490 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.382309 containerd[1610]: time="2026-01-23T01:01:09.382271777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ln8zh,Uid:0cbaed9c-1e9e-46ac-b3d1-c67f640303a3,Namespace:calico-system,Attempt:0,}"
Jan 23 01:01:09.414035 containerd[1610]: time="2026-01-23T01:01:09.413907891Z" level=info msg="connecting to shim d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833" address="unix:///run/containerd/s/9af8436cf1bf8d50d047b9ad083af31c66c7bbd81edd59b1eef7a8c472e758ab" namespace=k8s.io protocol=ttrpc version=3
Jan 23 01:01:09.449995 systemd[1]: Started cri-containerd-d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833.scope - libcontainer container d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833.
Jan 23 01:01:09.450800 kubelet[2778]: E0123 01:01:09.450772 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.450800 kubelet[2778]: W0123 01:01:09.450794 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.451111 kubelet[2778]: E0123 01:01:09.450810 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.451673 kubelet[2778]: E0123 01:01:09.451656 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.451830 kubelet[2778]: W0123 01:01:09.451756 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.451830 kubelet[2778]: E0123 01:01:09.451780 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.452276 kubelet[2778]: E0123 01:01:09.452242 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.452276 kubelet[2778]: W0123 01:01:09.452257 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.452276 kubelet[2778]: E0123 01:01:09.452271 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.452692 kubelet[2778]: E0123 01:01:09.452658 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.452692 kubelet[2778]: W0123 01:01:09.452670 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.452692 kubelet[2778]: E0123 01:01:09.452681 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.453430 kubelet[2778]: E0123 01:01:09.453380 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.453430 kubelet[2778]: W0123 01:01:09.453412 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.453430 kubelet[2778]: E0123 01:01:09.453420 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.453890 kubelet[2778]: E0123 01:01:09.453861 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.453890 kubelet[2778]: W0123 01:01:09.453870 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.453890 kubelet[2778]: E0123 01:01:09.453879 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.454586 kubelet[2778]: E0123 01:01:09.454575 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.454739 kubelet[2778]: W0123 01:01:09.454667 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.454739 kubelet[2778]: E0123 01:01:09.454683 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.455266 kubelet[2778]: E0123 01:01:09.455227 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.455266 kubelet[2778]: W0123 01:01:09.455237 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.455266 kubelet[2778]: E0123 01:01:09.455244 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.455575 kubelet[2778]: E0123 01:01:09.455530 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.455575 kubelet[2778]: W0123 01:01:09.455546 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.455575 kubelet[2778]: E0123 01:01:09.455559 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.455800 kubelet[2778]: E0123 01:01:09.455778 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.455800 kubelet[2778]: W0123 01:01:09.455793 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.455800 kubelet[2778]: E0123 01:01:09.455801 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.455996 kubelet[2778]: E0123 01:01:09.455977 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.455996 kubelet[2778]: W0123 01:01:09.455988 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.455996 kubelet[2778]: E0123 01:01:09.455994 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.456146 kubelet[2778]: E0123 01:01:09.456139 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.456146 kubelet[2778]: W0123 01:01:09.456145 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.456312 kubelet[2778]: E0123 01:01:09.456153 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.456364 kubelet[2778]: E0123 01:01:09.456321 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.456364 kubelet[2778]: W0123 01:01:09.456327 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.456364 kubelet[2778]: E0123 01:01:09.456333 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.456550 kubelet[2778]: E0123 01:01:09.456507 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.456550 kubelet[2778]: W0123 01:01:09.456520 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.456550 kubelet[2778]: E0123 01:01:09.456526 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.456787 kubelet[2778]: E0123 01:01:09.456727 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.456787 kubelet[2778]: W0123 01:01:09.456739 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.456787 kubelet[2778]: E0123 01:01:09.456757 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.457890 kubelet[2778]: E0123 01:01:09.457873 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.457890 kubelet[2778]: W0123 01:01:09.457886 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.457955 kubelet[2778]: E0123 01:01:09.457895 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.458083 kubelet[2778]: E0123 01:01:09.458062 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.458083 kubelet[2778]: W0123 01:01:09.458077 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.458116 kubelet[2778]: E0123 01:01:09.458106 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.458332 kubelet[2778]: E0123 01:01:09.458311 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.458332 kubelet[2778]: W0123 01:01:09.458326 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.458374 kubelet[2778]: E0123 01:01:09.458335 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.458531 kubelet[2778]: E0123 01:01:09.458515 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.458531 kubelet[2778]: W0123 01:01:09.458527 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.458569 kubelet[2778]: E0123 01:01:09.458533 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.459010 kubelet[2778]: E0123 01:01:09.458991 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.459010 kubelet[2778]: W0123 01:01:09.459005 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.459010 kubelet[2778]: E0123 01:01:09.459012 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.459256 kubelet[2778]: E0123 01:01:09.459236 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.459256 kubelet[2778]: W0123 01:01:09.459249 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.459297 kubelet[2778]: E0123 01:01:09.459262 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.459538 kubelet[2778]: E0123 01:01:09.459478 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.459538 kubelet[2778]: W0123 01:01:09.459492 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.459538 kubelet[2778]: E0123 01:01:09.459502 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.460273 kubelet[2778]: E0123 01:01:09.460254 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.460273 kubelet[2778]: W0123 01:01:09.460269 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.460328 kubelet[2778]: E0123 01:01:09.460277 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.460569 kubelet[2778]: E0123 01:01:09.460549 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.460616 kubelet[2778]: W0123 01:01:09.460563 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.460616 kubelet[2778]: E0123 01:01:09.460613 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.461230 kubelet[2778]: E0123 01:01:09.461177 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.461230 kubelet[2778]: W0123 01:01:09.461190 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.461230 kubelet[2778]: E0123 01:01:09.461198 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.466630 kubelet[2778]: E0123 01:01:09.466570 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:09.466630 kubelet[2778]: W0123 01:01:09.466582 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:09.466630 kubelet[2778]: E0123 01:01:09.466591 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:09.485476 containerd[1610]: time="2026-01-23T01:01:09.485388928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ln8zh,Uid:0cbaed9c-1e9e-46ac-b3d1-c67f640303a3,Namespace:calico-system,Attempt:0,} returns sandbox id \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\""
Jan 23 01:01:10.731866 kubelet[2778]: E0123 01:01:10.731780 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:01:11.258441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1494218291.mount: Deactivated successfully.
Jan 23 01:01:12.324358 containerd[1610]: time="2026-01-23T01:01:12.324281783Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:12.325889 containerd[1610]: time="2026-01-23T01:01:12.325854301Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 01:01:12.327550 containerd[1610]: time="2026-01-23T01:01:12.326821941Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:12.328469 containerd[1610]: time="2026-01-23T01:01:12.328444058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 01:01:12.328762 containerd[1610]: time="2026-01-23T01:01:12.328681728Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.996898488s"
Jan 23 01:01:12.328821 containerd[1610]: time="2026-01-23T01:01:12.328810668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 01:01:12.330401 containerd[1610]: time="2026-01-23T01:01:12.330374907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 01:01:12.344459 containerd[1610]: time="2026-01-23T01:01:12.343803783Z" level=info msg="CreateContainer within sandbox \"621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 01:01:12.351377 containerd[1610]: time="2026-01-23T01:01:12.349320876Z" level=info msg="Container ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:01:12.355846 containerd[1610]: time="2026-01-23T01:01:12.355810300Z" level=info msg="CreateContainer within sandbox \"621d6180d40a1080fdfbf936b2ec365868b0d43d80616605c180269ec62124b8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348\""
Jan 23 01:01:12.356571 containerd[1610]: time="2026-01-23T01:01:12.356481189Z" level=info msg="StartContainer for \"ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348\""
Jan 23 01:01:12.357565 containerd[1610]: time="2026-01-23T01:01:12.357505288Z" level=info msg="connecting to shim ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348" address="unix:///run/containerd/s/c09e7e2aa50abff4fe2a903873054117288d5473031ee146f794581542c0cc2f" protocol=ttrpc version=3
Jan 23 01:01:12.377900 systemd[1]: Started cri-containerd-ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348.scope - libcontainer container ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348.
Jan 23 01:01:12.438521 containerd[1610]: time="2026-01-23T01:01:12.438474924Z" level=info msg="StartContainer for \"ceffb192a4521b2179ed031d2ddd90a7ac4a8383ec654aea4b8390f383bf1348\" returns successfully"
Jan 23 01:01:12.732820 kubelet[2778]: E0123 01:01:12.732633 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:01:12.872282 kubelet[2778]: E0123 01:01:12.872230 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.872282 kubelet[2778]: W0123 01:01:12.872268 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.872954 kubelet[2778]: E0123 01:01:12.872309 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.872954 kubelet[2778]: E0123 01:01:12.872669 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.872954 kubelet[2778]: W0123 01:01:12.872734 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.872954 kubelet[2778]: E0123 01:01:12.872800 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.873534 kubelet[2778]: E0123 01:01:12.873282 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.873534 kubelet[2778]: W0123 01:01:12.873299 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.873534 kubelet[2778]: E0123 01:01:12.873313 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.874117 kubelet[2778]: E0123 01:01:12.874076 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.874303 kubelet[2778]: W0123 01:01:12.874253 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.874303 kubelet[2778]: E0123 01:01:12.874282 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.874987 kubelet[2778]: E0123 01:01:12.874945 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.875065 kubelet[2778]: W0123 01:01:12.875000 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.875065 kubelet[2778]: E0123 01:01:12.875015 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.875627 kubelet[2778]: E0123 01:01:12.875566 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.875627 kubelet[2778]: W0123 01:01:12.875588 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.875627 kubelet[2778]: E0123 01:01:12.875603 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.876319 kubelet[2778]: E0123 01:01:12.876273 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.876319 kubelet[2778]: W0123 01:01:12.876298 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.876319 kubelet[2778]: E0123 01:01:12.876312 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.876963 kubelet[2778]: E0123 01:01:12.876853 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.876963 kubelet[2778]: W0123 01:01:12.876871 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.876963 kubelet[2778]: E0123 01:01:12.876885 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.877413 kubelet[2778]: E0123 01:01:12.877361 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.877413 kubelet[2778]: W0123 01:01:12.877380 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.877413 kubelet[2778]: E0123 01:01:12.877394 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.878580 kubelet[2778]: E0123 01:01:12.877868 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.878580 kubelet[2778]: W0123 01:01:12.877886 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.878580 kubelet[2778]: E0123 01:01:12.877899 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.878580 kubelet[2778]: E0123 01:01:12.878368 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.878580 kubelet[2778]: W0123 01:01:12.878398 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.878580 kubelet[2778]: E0123 01:01:12.878411 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.879006 kubelet[2778]: E0123 01:01:12.878962 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.879006 kubelet[2778]: W0123 01:01:12.878994 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.879006 kubelet[2778]: E0123 01:01:12.879008 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.879504 kubelet[2778]: E0123 01:01:12.879413 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.879504 kubelet[2778]: W0123 01:01:12.879431 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.879504 kubelet[2778]: E0123 01:01:12.879445 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.879953 kubelet[2778]: E0123 01:01:12.879885 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.879953 kubelet[2778]: W0123 01:01:12.879898 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.879953 kubelet[2778]: E0123 01:01:12.879912 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.880570 kubelet[2778]: E0123 01:01:12.880250 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.880570 kubelet[2778]: W0123 01:01:12.880267 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.880570 kubelet[2778]: E0123 01:01:12.880279 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.883105 kubelet[2778]: E0123 01:01:12.883062 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.883105 kubelet[2778]: W0123 01:01:12.883091 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.883105 kubelet[2778]: E0123 01:01:12.883109 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.883766 kubelet[2778]: E0123 01:01:12.883672 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.883766 kubelet[2778]: W0123 01:01:12.883698 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.883766 kubelet[2778]: E0123 01:01:12.883728 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.884626 kubelet[2778]: E0123 01:01:12.884470 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.884626 kubelet[2778]: W0123 01:01:12.884586 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.884626 kubelet[2778]: E0123 01:01:12.884601 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.885339 kubelet[2778]: E0123 01:01:12.885255 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.885339 kubelet[2778]: W0123 01:01:12.885320 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.885339 kubelet[2778]: E0123 01:01:12.885336 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.886185 kubelet[2778]: E0123 01:01:12.886148 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.886185 kubelet[2778]: W0123 01:01:12.886172 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.886337 kubelet[2778]: E0123 01:01:12.886223 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.886985 kubelet[2778]: E0123 01:01:12.886932 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.886985 kubelet[2778]: W0123 01:01:12.886957 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.887406 kubelet[2778]: E0123 01:01:12.886972 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.887679 kubelet[2778]: E0123 01:01:12.887628 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.887679 kubelet[2778]: W0123 01:01:12.887651 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.887679 kubelet[2778]: E0123 01:01:12.887665 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.888204 kubelet[2778]: E0123 01:01:12.888162 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.888204 kubelet[2778]: W0123 01:01:12.888185 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.888204 kubelet[2778]: E0123 01:01:12.888199 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.888950 kubelet[2778]: E0123 01:01:12.888859 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.888950 kubelet[2778]: W0123 01:01:12.888879 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.888950 kubelet[2778]: E0123 01:01:12.888893 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.889979 kubelet[2778]: E0123 01:01:12.889927 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.889979 kubelet[2778]: W0123 01:01:12.889970 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.889979 kubelet[2778]: E0123 01:01:12.889987 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.890946 kubelet[2778]: E0123 01:01:12.890907 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.890946 kubelet[2778]: W0123 01:01:12.890936 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.891070 kubelet[2778]: E0123 01:01:12.890957 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.891380 kubelet[2778]: E0123 01:01:12.891337 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.891380 kubelet[2778]: W0123 01:01:12.891366 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.891380 kubelet[2778]: E0123 01:01:12.891381 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.892311 kubelet[2778]: E0123 01:01:12.892256 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.892311 kubelet[2778]: W0123 01:01:12.892281 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.892311 kubelet[2778]: E0123 01:01:12.892296 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.892823 kubelet[2778]: E0123 01:01:12.892790 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.892823 kubelet[2778]: W0123 01:01:12.892810 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.892823 kubelet[2778]: E0123 01:01:12.892825 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.893298 kubelet[2778]: E0123 01:01:12.893246 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.893298 kubelet[2778]: W0123 01:01:12.893269 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.893298 kubelet[2778]: E0123 01:01:12.893282 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.894034 kubelet[2778]: E0123 01:01:12.893982 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.894034 kubelet[2778]: W0123 01:01:12.894010 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.894034 kubelet[2778]: E0123 01:01:12.894029 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.895219 kubelet[2778]: E0123 01:01:12.894940 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.895219 kubelet[2778]: W0123 01:01:12.894978 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.895219 kubelet[2778]: E0123 01:01:12.895014 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:12.895840 kubelet[2778]: E0123 01:01:12.895778 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:12.895840 kubelet[2778]: W0123 01:01:12.895830 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:12.895957 kubelet[2778]: E0123 01:01:12.895859 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.840012 kubelet[2778]: I0123 01:01:13.839963 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 01:01:13.886421 kubelet[2778]: E0123 01:01:13.886384 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.886421 kubelet[2778]: W0123 01:01:13.886415 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.886693 kubelet[2778]: E0123 01:01:13.886442 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.886978 kubelet[2778]: E0123 01:01:13.886924 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.886978 kubelet[2778]: W0123 01:01:13.886961 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.886978 kubelet[2778]: E0123 01:01:13.886997 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.887855 kubelet[2778]: E0123 01:01:13.887515 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.887855 kubelet[2778]: W0123 01:01:13.887542 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.887855 kubelet[2778]: E0123 01:01:13.887575 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.888302 kubelet[2778]: E0123 01:01:13.888235 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.888403 kubelet[2778]: W0123 01:01:13.888309 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.888403 kubelet[2778]: E0123 01:01:13.888327 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.888876 kubelet[2778]: E0123 01:01:13.888848 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.888931 kubelet[2778]: W0123 01:01:13.888875 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.888931 kubelet[2778]: E0123 01:01:13.888894 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.889352 kubelet[2778]: E0123 01:01:13.889319 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.889352 kubelet[2778]: W0123 01:01:13.889343 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.889854 kubelet[2778]: E0123 01:01:13.889359 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.889854 kubelet[2778]: E0123 01:01:13.889694 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.889854 kubelet[2778]: W0123 01:01:13.889707 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.889854 kubelet[2778]: E0123 01:01:13.889782 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.890627 kubelet[2778]: E0123 01:01:13.890238 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.890627 kubelet[2778]: W0123 01:01:13.890256 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.890627 kubelet[2778]: E0123 01:01:13.890272 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.891141 kubelet[2778]: E0123 01:01:13.890666 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.891141 kubelet[2778]: W0123 01:01:13.890679 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.891141 kubelet[2778]: E0123 01:01:13.890694 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.891369 kubelet[2778]: E0123 01:01:13.891225 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.891369 kubelet[2778]: W0123 01:01:13.891246 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.891369 kubelet[2778]: E0123 01:01:13.891262 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.891678 kubelet[2778]: E0123 01:01:13.891623 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.891678 kubelet[2778]: W0123 01:01:13.891639 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.891678 kubelet[2778]: E0123 01:01:13.891654 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.892445 kubelet[2778]: E0123 01:01:13.892163 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.892445 kubelet[2778]: W0123 01:01:13.892189 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.892445 kubelet[2778]: E0123 01:01:13.892223 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.893366 kubelet[2778]: E0123 01:01:13.892617 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.893366 kubelet[2778]: W0123 01:01:13.892630 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.893366 kubelet[2778]: E0123 01:01:13.892645 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.893366 kubelet[2778]: E0123 01:01:13.893084 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.893366 kubelet[2778]: W0123 01:01:13.893102 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.893366 kubelet[2778]: E0123 01:01:13.893122 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 01:01:13.894572 kubelet[2778]: E0123 01:01:13.893529 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 01:01:13.894572 kubelet[2778]: W0123 01:01:13.893545 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 01:01:13.894572 kubelet[2778]: E0123 01:01:13.893560 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:01:13.894572 kubelet[2778]: E0123 01:01:13.894439 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.894572 kubelet[2778]: W0123 01:01:13.894455 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.894572 kubelet[2778]: E0123 01:01:13.894471 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.895234 kubelet[2778]: E0123 01:01:13.895007 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.895234 kubelet[2778]: W0123 01:01:13.895026 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.895234 kubelet[2778]: E0123 01:01:13.895047 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.895643 kubelet[2778]: E0123 01:01:13.895596 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.895643 kubelet[2778]: W0123 01:01:13.895619 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.895643 kubelet[2778]: E0123 01:01:13.895635 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.896324 kubelet[2778]: E0123 01:01:13.896246 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.896324 kubelet[2778]: W0123 01:01:13.896271 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.896324 kubelet[2778]: E0123 01:01:13.896287 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.896872 kubelet[2778]: E0123 01:01:13.896819 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.896872 kubelet[2778]: W0123 01:01:13.896849 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.896872 kubelet[2778]: E0123 01:01:13.896872 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:01:13.897345 kubelet[2778]: E0123 01:01:13.897312 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.897345 kubelet[2778]: W0123 01:01:13.897336 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.897503 kubelet[2778]: E0123 01:01:13.897351 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.898037 kubelet[2778]: E0123 01:01:13.897993 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.898037 kubelet[2778]: W0123 01:01:13.898031 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.898304 kubelet[2778]: E0123 01:01:13.898062 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.898893 kubelet[2778]: E0123 01:01:13.898593 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.898893 kubelet[2778]: W0123 01:01:13.898615 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.898893 kubelet[2778]: E0123 01:01:13.898631 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.899138 kubelet[2778]: E0123 01:01:13.899096 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.899138 kubelet[2778]: W0123 01:01:13.899121 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.899339 kubelet[2778]: E0123 01:01:13.899147 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.900325 kubelet[2778]: E0123 01:01:13.900286 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.900325 kubelet[2778]: W0123 01:01:13.900313 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.900325 kubelet[2778]: E0123 01:01:13.900330 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:01:13.901124 kubelet[2778]: E0123 01:01:13.901054 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.901124 kubelet[2778]: W0123 01:01:13.901076 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.902073 kubelet[2778]: E0123 01:01:13.901798 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.902348 kubelet[2778]: E0123 01:01:13.902323 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.902441 kubelet[2778]: W0123 01:01:13.902423 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.902532 kubelet[2778]: E0123 01:01:13.902516 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.903200 kubelet[2778]: E0123 01:01:13.902984 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.903200 kubelet[2778]: W0123 01:01:13.903002 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.903200 kubelet[2778]: E0123 01:01:13.903017 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.904074 kubelet[2778]: E0123 01:01:13.904037 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.904219 kubelet[2778]: W0123 01:01:13.904188 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.904820 kubelet[2778]: E0123 01:01:13.904390 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.904920 kubelet[2778]: E0123 01:01:13.904897 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.904920 kubelet[2778]: W0123 01:01:13.904913 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.904993 kubelet[2778]: E0123 01:01:13.904930 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:01:13.905409 kubelet[2778]: E0123 01:01:13.905377 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.905409 kubelet[2778]: W0123 01:01:13.905401 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.905539 kubelet[2778]: E0123 01:01:13.905422 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.906146 kubelet[2778]: E0123 01:01:13.906117 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.906244 kubelet[2778]: W0123 01:01:13.906146 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.906244 kubelet[2778]: E0123 01:01:13.906177 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 01:01:13.906614 kubelet[2778]: E0123 01:01:13.906578 2778 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 01:01:13.906614 kubelet[2778]: W0123 01:01:13.906602 2778 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 01:01:13.906698 kubelet[2778]: E0123 01:01:13.906621 2778 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 01:01:14.204538 containerd[1610]: time="2026-01-23T01:01:14.204415378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:14.206801 containerd[1610]: time="2026-01-23T01:01:14.206089986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 01:01:14.207128 containerd[1610]: time="2026-01-23T01:01:14.207087125Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:14.208942 containerd[1610]: time="2026-01-23T01:01:14.208893764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:14.209334 containerd[1610]: time="2026-01-23T01:01:14.209308264Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.878904457s" Jan 23 01:01:14.209372 containerd[1610]: time="2026-01-23T01:01:14.209335874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 01:01:14.213643 containerd[1610]: time="2026-01-23T01:01:14.213606509Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 01:01:14.231400 containerd[1610]: time="2026-01-23T01:01:14.228878577Z" level=info msg="Container a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:01:14.238734 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4274448846.mount: Deactivated successfully. Jan 23 01:01:14.241736 containerd[1610]: time="2026-01-23T01:01:14.241644867Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06\"" Jan 23 01:01:14.242209 containerd[1610]: time="2026-01-23T01:01:14.242178746Z" level=info msg="StartContainer for \"a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06\"" Jan 23 01:01:14.243522 containerd[1610]: time="2026-01-23T01:01:14.243465075Z" level=info msg="connecting to shim a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06" address="unix:///run/containerd/s/9af8436cf1bf8d50d047b9ad083af31c66c7bbd81edd59b1eef7a8c472e758ab" protocol=ttrpc version=3 Jan 23 01:01:14.267985 systemd[1]: Started cri-containerd-a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06.scope - libcontainer container a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06. 
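The flexvol-driver container created above is what eventually quiets the repeated FlexVolume probe failures: calico's pod2daemon-flexvol image installs the uds binary that kubelet keeps looking for under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/. Until it runs, the probe fails in two ordinary Go ways, which this standalone sketch (not kubelet's actual code) reproduces: looking up a missing executable yields exec.ErrNotFound, whose message is the "executable file not found in $PATH" string in the log, and unmarshalling the resulting empty output yields "unexpected end of JSON input".

package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// 1. The calico uds driver binary has not been installed yet, so a
	//    lookup fails with exec.ErrNotFound, i.e. the exact string kubelet
	//    logs (assuming "uds" is not on PATH in this sketch).
	_, err := exec.LookPath("uds")
	fmt.Println(err, errors.Is(err, exec.ErrNotFound))
	// exec: "uds": executable file not found in $PATH  true

	// 2. The failed driver call produced no output, and decoding an empty
	//    byte slice as JSON fails with exactly the other logged error.
	var status struct {
		Status string `json:"status"`
	}
	fmt.Println(json.Unmarshal([]byte(""), &status))
	// unexpected end of JSON input
}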
Jan 23 01:01:14.366853 containerd[1610]: time="2026-01-23T01:01:14.365640485Z" level=info msg="StartContainer for \"a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06\" returns successfully" Jan 23 01:01:14.387882 systemd[1]: cri-containerd-a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06.scope: Deactivated successfully. Jan 23 01:01:14.392336 containerd[1610]: time="2026-01-23T01:01:14.391729784Z" level=info msg="received container exit event container_id:\"a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06\" id:\"a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06\" pid:3512 exited_at:{seconds:1769130074 nanos:391151964}" Jan 23 01:01:14.413508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4d89c1d01ff5729826d1ef86f1a615fb3eed6230927f24bd4fd170c195eeb06-rootfs.mount: Deactivated successfully. Jan 23 01:01:14.731563 kubelet[2778]: E0123 01:01:14.731473 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:14.856621 containerd[1610]: time="2026-01-23T01:01:14.856265913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 01:01:14.887773 kubelet[2778]: I0123 01:01:14.887628 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5db4dd6f8c-zv8ts" podStartSLOduration=3.888329622 podStartE2EDuration="6.887608078s" podCreationTimestamp="2026-01-23 01:01:08 +0000 UTC" firstStartedPulling="2026-01-23 01:01:09.330461301 +0000 UTC m=+21.747094677" lastFinishedPulling="2026-01-23 01:01:12.329739747 +0000 UTC m=+24.746373133" observedRunningTime="2026-01-23 01:01:12.861273011 +0000 UTC m=+25.277906437" watchObservedRunningTime="2026-01-23 01:01:14.887608078 +0000 UTC m=+27.304241504" Jan 23 01:01:16.731446 kubelet[2778]: E0123 01:01:16.731353 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:18.583186 kubelet[2778]: I0123 01:01:18.583091 2778 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 01:01:18.731623 kubelet[2778]: E0123 01:01:18.731556 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:19.083537 containerd[1610]: time="2026-01-23T01:01:19.083478607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:19.084896 containerd[1610]: time="2026-01-23T01:01:19.084850037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 23 01:01:19.085790 containerd[1610]: time="2026-01-23T01:01:19.085701256Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:19.088281 containerd[1610]: time="2026-01-23T01:01:19.088219855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:19.089374 containerd[1610]: time="2026-01-23T01:01:19.089209325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.232740202s" Jan 23 01:01:19.089374 containerd[1610]: time="2026-01-23T01:01:19.089237355Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 23 01:01:19.092960 containerd[1610]: time="2026-01-23T01:01:19.092930904Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 01:01:19.102559 containerd[1610]: time="2026-01-23T01:01:19.101855881Z" level=info msg="Container 998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:01:19.120904 containerd[1610]: time="2026-01-23T01:01:19.120860874Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10\"" Jan 23 01:01:19.121421 containerd[1610]: time="2026-01-23T01:01:19.121381924Z" level=info msg="StartContainer for \"998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10\"" Jan 23 01:01:19.122911 containerd[1610]: time="2026-01-23T01:01:19.122874133Z" level=info msg="connecting to shim 998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10" address="unix:///run/containerd/s/9af8436cf1bf8d50d047b9ad083af31c66c7bbd81edd59b1eef7a8c472e758ab" protocol=ttrpc version=3 Jan 23 01:01:19.142870 systemd[1]: Started cri-containerd-998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10.scope - libcontainer container 998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10. Jan 23 01:01:19.217212 containerd[1610]: time="2026-01-23T01:01:19.217169299Z" level=info msg="StartContainer for \"998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10\" returns successfully" Jan 23 01:01:19.688192 containerd[1610]: time="2026-01-23T01:01:19.688145037Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 01:01:19.691280 systemd[1]: cri-containerd-998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10.scope: Deactivated successfully. Jan 23 01:01:19.691666 systemd[1]: cri-containerd-998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10.scope: Consumed 459ms CPU time, 193.1M memory peak, 171.3M written to disk. 
Jan 23 01:01:19.692489 containerd[1610]: time="2026-01-23T01:01:19.692457056Z" level=info msg="received container exit event container_id:\"998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10\" id:\"998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10\" pid:3569 exited_at:{seconds:1769130079 nanos:692061095}" Jan 23 01:01:19.719019 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-998834d55ae4d5c8d63e422da4ecbee96c8f62299bd5045e038ca58154ac5a10-rootfs.mount: Deactivated successfully. Jan 23 01:01:19.761380 kubelet[2778]: I0123 01:01:19.760924 2778 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 01:01:19.794660 systemd[1]: Created slice kubepods-burstable-pod3dcd0190_221a_4a7f_9f15_ae8db7886edb.slice - libcontainer container kubepods-burstable-pod3dcd0190_221a_4a7f_9f15_ae8db7886edb.slice. Jan 23 01:01:19.813640 systemd[1]: Created slice kubepods-besteffort-pod9a36a138_e7e1_4031_9559_4a28820b727d.slice - libcontainer container kubepods-besteffort-pod9a36a138_e7e1_4031_9559_4a28820b727d.slice. Jan 23 01:01:19.825060 systemd[1]: Created slice kubepods-besteffort-pod5d7dff36_d3fb_4b97_8324_ac6e2554491c.slice - libcontainer container kubepods-besteffort-pod5d7dff36_d3fb_4b97_8324_ac6e2554491c.slice. Jan 23 01:01:19.835132 systemd[1]: Created slice kubepods-besteffort-pod815fb343_a824_40f3_bf26_2f785b54ac4d.slice - libcontainer container kubepods-besteffort-pod815fb343_a824_40f3_bf26_2f785b54ac4d.slice. Jan 23 01:01:19.844231 systemd[1]: Created slice kubepods-besteffort-poda59d5fb0_f7af_4d9c_9241_42ed48cacc40.slice - libcontainer container kubepods-besteffort-poda59d5fb0_f7af_4d9c_9241_42ed48cacc40.slice. Jan 23 01:01:19.854615 systemd[1]: Created slice kubepods-burstable-pod7dee7e1d_f87d_44e7_ae0a_0511acc0e3f8.slice - libcontainer container kubepods-burstable-pod7dee7e1d_f87d_44e7_ae0a_0511acc0e3f8.slice. Jan 23 01:01:19.861997 systemd[1]: Created slice kubepods-besteffort-pod9345d4db_ab4d_4bde_b8b4_e031305112f9.slice - libcontainer container kubepods-besteffort-pod9345d4db_ab4d_4bde_b8b4_e031305112f9.slice. 
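The slice names above come from kubelet's systemd cgroup driver, which encodes the pod's QoS class and UID into the unit name, mapping the UID's dashes to underscores because systemd treats "-" as a hierarchy separator. A simplified sketch that reproduces the unit names seen in this log (the real logic lives in kubelet's cgroup manager):

package main

import (
	"fmt"
	"strings"
)

// podSlice builds kubepods-<qos>-pod<uid>.slice, with the dashes in the
// pod UID replaced by underscores so systemd does not split the UID into
// nested slices.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Matches the first "Created slice" entry above.
	fmt.Println(podSlice("burstable", "3dcd0190-221a-4a7f-9f15-ae8db7886edb"))
	// kubepods-burstable-pod3dcd0190_221a_4a7f_9f15_ae8db7886edb.slice
}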
Jan 23 01:01:19.870814 containerd[1610]: time="2026-01-23T01:01:19.870778681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 01:01:19.936197 kubelet[2778]: I0123 01:01:19.936102 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-backend-key-pair\") pod \"whisker-58d47d7f66-7cwlj\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " pod="calico-system/whisker-58d47d7f66-7cwlj" Jan 23 01:01:19.936197 kubelet[2778]: I0123 01:01:19.936175 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqllq\" (UniqueName: \"kubernetes.io/projected/5d7dff36-d3fb-4b97-8324-ac6e2554491c-kube-api-access-nqllq\") pod \"calico-kube-controllers-56555c56bc-bq2nx\" (UID: \"5d7dff36-d3fb-4b97-8324-ac6e2554491c\") " pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" Jan 23 01:01:19.936197 kubelet[2778]: I0123 01:01:19.936201 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/9345d4db-ab4d-4bde-b8b4-e031305112f9-goldmane-key-pair\") pod \"goldmane-666569f655-8rskh\" (UID: \"9345d4db-ab4d-4bde-b8b4-e031305112f9\") " pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:19.936451 kubelet[2778]: I0123 01:01:19.936221 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btf7l\" (UniqueName: \"kubernetes.io/projected/3dcd0190-221a-4a7f-9f15-ae8db7886edb-kube-api-access-btf7l\") pod \"coredns-674b8bbfcf-xc9r4\" (UID: \"3dcd0190-221a-4a7f-9f15-ae8db7886edb\") " pod="kube-system/coredns-674b8bbfcf-xc9r4" Jan 23 01:01:19.936451 kubelet[2778]: I0123 01:01:19.936243 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dcd0190-221a-4a7f-9f15-ae8db7886edb-config-volume\") pod \"coredns-674b8bbfcf-xc9r4\" (UID: \"3dcd0190-221a-4a7f-9f15-ae8db7886edb\") " pod="kube-system/coredns-674b8bbfcf-xc9r4" Jan 23 01:01:19.936451 kubelet[2778]: I0123 01:01:19.936262 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7nq7\" (UniqueName: \"kubernetes.io/projected/9a36a138-e7e1-4031-9559-4a28820b727d-kube-api-access-r7nq7\") pod \"calico-apiserver-6464b5f569-l7jwf\" (UID: \"9a36a138-e7e1-4031-9559-4a28820b727d\") " pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" Jan 23 01:01:19.936451 kubelet[2778]: I0123 01:01:19.936279 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8-config-volume\") pod \"coredns-674b8bbfcf-dz8md\" (UID: \"7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8\") " pod="kube-system/coredns-674b8bbfcf-dz8md" Jan 23 01:01:19.936451 kubelet[2778]: I0123 01:01:19.936300 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9345d4db-ab4d-4bde-b8b4-e031305112f9-config\") pod \"goldmane-666569f655-8rskh\" (UID: \"9345d4db-ab4d-4bde-b8b4-e031305112f9\") " pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:19.936618 kubelet[2778]: I0123 01:01:19.936319 2778 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9a36a138-e7e1-4031-9559-4a28820b727d-calico-apiserver-certs\") pod \"calico-apiserver-6464b5f569-l7jwf\" (UID: \"9a36a138-e7e1-4031-9559-4a28820b727d\") " pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" Jan 23 01:01:19.936618 kubelet[2778]: I0123 01:01:19.936340 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a59d5fb0-f7af-4d9c-9241-42ed48cacc40-calico-apiserver-certs\") pod \"calico-apiserver-6464b5f569-5d4c5\" (UID: \"a59d5fb0-f7af-4d9c-9241-42ed48cacc40\") " pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" Jan 23 01:01:19.936618 kubelet[2778]: I0123 01:01:19.936364 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d7dff36-d3fb-4b97-8324-ac6e2554491c-tigera-ca-bundle\") pod \"calico-kube-controllers-56555c56bc-bq2nx\" (UID: \"5d7dff36-d3fb-4b97-8324-ac6e2554491c\") " pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" Jan 23 01:01:19.936618 kubelet[2778]: I0123 01:01:19.936383 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcfrx\" (UniqueName: \"kubernetes.io/projected/9345d4db-ab4d-4bde-b8b4-e031305112f9-kube-api-access-qcfrx\") pod \"goldmane-666569f655-8rskh\" (UID: \"9345d4db-ab4d-4bde-b8b4-e031305112f9\") " pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:19.936618 kubelet[2778]: I0123 01:01:19.936443 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-559vc\" (UniqueName: \"kubernetes.io/projected/7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8-kube-api-access-559vc\") pod \"coredns-674b8bbfcf-dz8md\" (UID: \"7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8\") " pod="kube-system/coredns-674b8bbfcf-dz8md" Jan 23 01:01:19.936825 kubelet[2778]: I0123 01:01:19.936464 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6kh9\" (UniqueName: \"kubernetes.io/projected/815fb343-a824-40f3-bf26-2f785b54ac4d-kube-api-access-c6kh9\") pod \"whisker-58d47d7f66-7cwlj\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " pod="calico-system/whisker-58d47d7f66-7cwlj" Jan 23 01:01:19.936825 kubelet[2778]: I0123 01:01:19.936482 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dq6hb\" (UniqueName: \"kubernetes.io/projected/a59d5fb0-f7af-4d9c-9241-42ed48cacc40-kube-api-access-dq6hb\") pod \"calico-apiserver-6464b5f569-5d4c5\" (UID: \"a59d5fb0-f7af-4d9c-9241-42ed48cacc40\") " pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" Jan 23 01:01:19.936825 kubelet[2778]: I0123 01:01:19.936505 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-ca-bundle\") pod \"whisker-58d47d7f66-7cwlj\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " pod="calico-system/whisker-58d47d7f66-7cwlj" Jan 23 01:01:19.936825 kubelet[2778]: I0123 01:01:19.936523 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/9345d4db-ab4d-4bde-b8b4-e031305112f9-goldmane-ca-bundle\") pod \"goldmane-666569f655-8rskh\" (UID: \"9345d4db-ab4d-4bde-b8b4-e031305112f9\") " pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:20.104238 containerd[1610]: time="2026-01-23T01:01:20.103962474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xc9r4,Uid:3dcd0190-221a-4a7f-9f15-ae8db7886edb,Namespace:kube-system,Attempt:0,}" Jan 23 01:01:20.121621 containerd[1610]: time="2026-01-23T01:01:20.121568978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-l7jwf,Uid:9a36a138-e7e1-4031-9559-4a28820b727d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:01:20.132918 containerd[1610]: time="2026-01-23T01:01:20.132708866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56555c56bc-bq2nx,Uid:5d7dff36-d3fb-4b97-8324-ac6e2554491c,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:20.143168 containerd[1610]: time="2026-01-23T01:01:20.143136372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d47d7f66-7cwlj,Uid:815fb343-a824-40f3-bf26-2f785b54ac4d,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:20.150541 containerd[1610]: time="2026-01-23T01:01:20.150500090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-5d4c5,Uid:a59d5fb0-f7af-4d9c-9241-42ed48cacc40,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:01:20.166099 containerd[1610]: time="2026-01-23T01:01:20.165089546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz8md,Uid:7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8,Namespace:kube-system,Attempt:0,}" Jan 23 01:01:20.169955 containerd[1610]: time="2026-01-23T01:01:20.169913545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8rskh,Uid:9345d4db-ab4d-4bde-b8b4-e031305112f9,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:20.259716 containerd[1610]: time="2026-01-23T01:01:20.259678459Z" level=error msg="Failed to destroy network for sandbox \"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.264192 containerd[1610]: time="2026-01-23T01:01:20.264158617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xc9r4,Uid:3dcd0190-221a-4a7f-9f15-ae8db7886edb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.264596 kubelet[2778]: E0123 01:01:20.264520 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.264596 kubelet[2778]: E0123 01:01:20.264594 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xc9r4" Jan 23 01:01:20.265123 kubelet[2778]: E0123 01:01:20.264619 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-xc9r4" Jan 23 01:01:20.265123 kubelet[2778]: E0123 01:01:20.264656 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-xc9r4_kube-system(3dcd0190-221a-4a7f-9f15-ae8db7886edb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-xc9r4_kube-system(3dcd0190-221a-4a7f-9f15-ae8db7886edb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13a10432ead8bd60a7272a8f1aaafe4d77a5f068fec238b10f4d5da7afb8e626\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-xc9r4" podUID="3dcd0190-221a-4a7f-9f15-ae8db7886edb" Jan 23 01:01:20.278992 containerd[1610]: time="2026-01-23T01:01:20.278878763Z" level=error msg="Failed to destroy network for sandbox \"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.280528 containerd[1610]: time="2026-01-23T01:01:20.280490943Z" level=error msg="Failed to destroy network for sandbox \"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.281944 containerd[1610]: time="2026-01-23T01:01:20.281810913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-5d4c5,Uid:a59d5fb0-f7af-4d9c-9241-42ed48cacc40,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.282350 kubelet[2778]: E0123 01:01:20.282317 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.282396 kubelet[2778]: E0123 01:01:20.282369 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" Jan 23 01:01:20.282396 kubelet[2778]: E0123 01:01:20.282386 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" Jan 23 01:01:20.282467 kubelet[2778]: E0123 01:01:20.282424 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d0b72be4d6a397b6b2c8acb6aa85b9fefd33880574176e18381f993ef3727e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:01:20.283092 containerd[1610]: time="2026-01-23T01:01:20.283067512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58d47d7f66-7cwlj,Uid:815fb343-a824-40f3-bf26-2f785b54ac4d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.283531 kubelet[2778]: E0123 01:01:20.283488 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.283585 kubelet[2778]: E0123 01:01:20.283537 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d47d7f66-7cwlj" Jan 23 01:01:20.283585 kubelet[2778]: E0123 01:01:20.283552 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58d47d7f66-7cwlj" Jan 23 01:01:20.283636 kubelet[2778]: E0123 01:01:20.283585 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58d47d7f66-7cwlj_calico-system(815fb343-a824-40f3-bf26-2f785b54ac4d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58d47d7f66-7cwlj_calico-system(815fb343-a824-40f3-bf26-2f785b54ac4d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb7f947c73d4af3860f75110f409423dd11b5d64eea43cb14206a65cc3283bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58d47d7f66-7cwlj" podUID="815fb343-a824-40f3-bf26-2f785b54ac4d" Jan 23 01:01:20.293703 containerd[1610]: time="2026-01-23T01:01:20.293615329Z" level=error msg="Failed to destroy network for sandbox \"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.296047 containerd[1610]: time="2026-01-23T01:01:20.296004549Z" level=error msg="Failed to destroy network for sandbox \"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.296414 containerd[1610]: time="2026-01-23T01:01:20.296390098Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56555c56bc-bq2nx,Uid:5d7dff36-d3fb-4b97-8324-ac6e2554491c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.296867 kubelet[2778]: E0123 01:01:20.296721 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.297209 kubelet[2778]: E0123 01:01:20.296995 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" Jan 23 01:01:20.297209 kubelet[2778]: E0123 01:01:20.297064 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" Jan 23 01:01:20.297325 kubelet[2778]: E0123 01:01:20.297287 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62667d41f64450ac7fbfa8167acb880e56e58f097940b69e7a4f398f102c7d51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:01:20.298012 containerd[1610]: time="2026-01-23T01:01:20.297884878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8rskh,Uid:9345d4db-ab4d-4bde-b8b4-e031305112f9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.298542 kubelet[2778]: E0123 01:01:20.298422 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.298723 kubelet[2778]: E0123 01:01:20.298711 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:20.300095 kubelet[2778]: E0123 01:01:20.298883 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-8rskh" Jan 23 01:01:20.300095 kubelet[2778]: E0123 01:01:20.300029 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6faae41d45291f4cbdf67e3f32db5113ad79df575ada1d5ee8ded17f07979dbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:01:20.306298 containerd[1610]: time="2026-01-23T01:01:20.306159115Z" level=error msg="Failed to destroy network for sandbox \"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.308127 containerd[1610]: time="2026-01-23T01:01:20.308099165Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-l7jwf,Uid:9a36a138-e7e1-4031-9559-4a28820b727d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.308362 kubelet[2778]: E0123 01:01:20.308344 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.308667 kubelet[2778]: E0123 01:01:20.308419 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" Jan 23 01:01:20.308667 kubelet[2778]: E0123 01:01:20.308436 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" Jan 23 01:01:20.308667 kubelet[2778]: E0123 01:01:20.308468 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10defd1c08b2f44dba2d385769bbc303a96710591783fc9e1f0d01e35d11a8a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:01:20.309714 containerd[1610]: time="2026-01-23T01:01:20.309675384Z" level=error msg="Failed to destroy network for sandbox \"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.310993 containerd[1610]: time="2026-01-23T01:01:20.310945294Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz8md,Uid:7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.311231 kubelet[2778]: E0123 01:01:20.311129 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.311231 kubelet[2778]: E0123 01:01:20.311205 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dz8md" Jan 23 01:01:20.311300 kubelet[2778]: E0123 01:01:20.311218 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dz8md" Jan 23 01:01:20.311300 kubelet[2778]: E0123 01:01:20.311277 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dz8md_kube-system(7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dz8md_kube-system(7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20e9e98b444cd966e27d2367272b943124c0681c2f69d0f9c7dbc02c57105769\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dz8md" podUID="7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8" Jan 23 01:01:20.745996 systemd[1]: Created slice kubepods-besteffort-pod5259dbbd_3b68_4b5b_9de4_297f13034dcd.slice - libcontainer container kubepods-besteffort-pod5259dbbd_3b68_4b5b_9de4_297f13034dcd.slice. 
Jan 23 01:01:20.751738 containerd[1610]: time="2026-01-23T01:01:20.751690376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc28p,Uid:5259dbbd-3b68-4b5b-9de4-297f13034dcd,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:20.866737 containerd[1610]: time="2026-01-23T01:01:20.866549733Z" level=error msg="Failed to destroy network for sandbox \"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.869456 containerd[1610]: time="2026-01-23T01:01:20.869218252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc28p,Uid:5259dbbd-3b68-4b5b-9de4-297f13034dcd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.870139 kubelet[2778]: E0123 01:01:20.869985 2778 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 01:01:20.873142 kubelet[2778]: E0123 01:01:20.870216 2778 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wc28p" Jan 23 01:01:20.873142 kubelet[2778]: E0123 01:01:20.870257 2778 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wc28p" Jan 23 01:01:20.873142 kubelet[2778]: E0123 01:01:20.870329 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f800ba6642eeff396a9ccfc49f05586fe39328b028362bcd1a1126ce8db3a89d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:21.104979 systemd[1]: run-netns-cni\x2d246e97ed\x2df236\x2da097\x2dbd97\x2daff982f5f7e4.mount: Deactivated successfully. 
Jan 23 01:01:21.105187 systemd[1]: run-netns-cni\x2dd03db500\x2d115c\x2d1236\x2de398\x2d785d2f063412.mount: Deactivated successfully. Jan 23 01:01:21.105423 systemd[1]: run-netns-cni\x2d081b500d\x2d9292\x2dfd1f\x2d0cde\x2d837ce4681737.mount: Deactivated successfully. Jan 23 01:01:21.105579 systemd[1]: run-netns-cni\x2dec83f88a\x2d81bb\x2d9904\x2d642d\x2d5c16005aea27.mount: Deactivated successfully. Jan 23 01:01:27.498816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273008448.mount: Deactivated successfully. Jan 23 01:01:27.539603 containerd[1610]: time="2026-01-23T01:01:27.539546454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:27.540834 containerd[1610]: time="2026-01-23T01:01:27.540674874Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 01:01:27.541770 containerd[1610]: time="2026-01-23T01:01:27.541717784Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:27.543376 containerd[1610]: time="2026-01-23T01:01:27.543351054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 01:01:27.543847 containerd[1610]: time="2026-01-23T01:01:27.543818754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 7.672835474s" Jan 23 01:01:27.543907 containerd[1610]: time="2026-01-23T01:01:27.543896895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 01:01:27.574418 containerd[1610]: time="2026-01-23T01:01:27.574370698Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 01:01:27.584879 containerd[1610]: time="2026-01-23T01:01:27.584847229Z" level=info msg="Container c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:01:27.594134 containerd[1610]: time="2026-01-23T01:01:27.594089120Z" level=info msg="CreateContainer within sandbox \"d6e63087727531102b124ce2fc9f8cbdefde6c3e44928e4a14ea87f26ad95833\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537\"" Jan 23 01:01:27.594634 containerd[1610]: time="2026-01-23T01:01:27.594607630Z" level=info msg="StartContainer for \"c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537\"" Jan 23 01:01:27.595730 containerd[1610]: time="2026-01-23T01:01:27.595701211Z" level=info msg="connecting to shim c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537" address="unix:///run/containerd/s/9af8436cf1bf8d50d047b9ad083af31c66c7bbd81edd59b1eef7a8c472e758ab" protocol=ttrpc version=3 Jan 23 01:01:27.632872 systemd[1]: Started 
cri-containerd-c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537.scope - libcontainer container c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537. Jan 23 01:01:27.712334 containerd[1610]: time="2026-01-23T01:01:27.712298974Z" level=info msg="StartContainer for \"c9b5255bdfe5ded268f42728bc521aa2b2004f8d70e9f9bb2a313087d82c6537\" returns successfully" Jan 23 01:01:27.797830 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 01:01:27.797967 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 01:01:27.926770 kubelet[2778]: I0123 01:01:27.925856 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ln8zh" podStartSLOduration=0.868251571 podStartE2EDuration="18.92583984s" podCreationTimestamp="2026-01-23 01:01:09 +0000 UTC" firstStartedPulling="2026-01-23 01:01:09.487241515 +0000 UTC m=+21.903874891" lastFinishedPulling="2026-01-23 01:01:27.544829784 +0000 UTC m=+39.961463160" observedRunningTime="2026-01-23 01:01:27.924243539 +0000 UTC m=+40.340876925" watchObservedRunningTime="2026-01-23 01:01:27.92583984 +0000 UTC m=+40.342473216" Jan 23 01:01:28.092218 kubelet[2778]: I0123 01:01:28.091858 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-backend-key-pair\") pod \"815fb343-a824-40f3-bf26-2f785b54ac4d\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " Jan 23 01:01:28.092547 kubelet[2778]: I0123 01:01:28.092512 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c6kh9\" (UniqueName: \"kubernetes.io/projected/815fb343-a824-40f3-bf26-2f785b54ac4d-kube-api-access-c6kh9\") pod \"815fb343-a824-40f3-bf26-2f785b54ac4d\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " Jan 23 01:01:28.092583 kubelet[2778]: I0123 01:01:28.092547 2778 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-ca-bundle\") pod \"815fb343-a824-40f3-bf26-2f785b54ac4d\" (UID: \"815fb343-a824-40f3-bf26-2f785b54ac4d\") " Jan 23 01:01:28.094386 kubelet[2778]: I0123 01:01:28.094358 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "815fb343-a824-40f3-bf26-2f785b54ac4d" (UID: "815fb343-a824-40f3-bf26-2f785b54ac4d"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 01:01:28.099159 kubelet[2778]: I0123 01:01:28.099129 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/815fb343-a824-40f3-bf26-2f785b54ac4d-kube-api-access-c6kh9" (OuterVolumeSpecName: "kube-api-access-c6kh9") pod "815fb343-a824-40f3-bf26-2f785b54ac4d" (UID: "815fb343-a824-40f3-bf26-2f785b54ac4d"). InnerVolumeSpecName "kube-api-access-c6kh9".
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 01:01:28.099228 kubelet[2778]: I0123 01:01:28.099214 2778 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "815fb343-a824-40f3-bf26-2f785b54ac4d" (UID: "815fb343-a824-40f3-bf26-2f785b54ac4d"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 01:01:28.194069 kubelet[2778]: I0123 01:01:28.193968 2778 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-backend-key-pair\") on node \"ci-4459-2-2-n-ad7bb25bb9\" DevicePath \"\"" Jan 23 01:01:28.194069 kubelet[2778]: I0123 01:01:28.194039 2778 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c6kh9\" (UniqueName: \"kubernetes.io/projected/815fb343-a824-40f3-bf26-2f785b54ac4d-kube-api-access-c6kh9\") on node \"ci-4459-2-2-n-ad7bb25bb9\" DevicePath \"\"" Jan 23 01:01:28.194069 kubelet[2778]: I0123 01:01:28.194055 2778 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/815fb343-a824-40f3-bf26-2f785b54ac4d-whisker-ca-bundle\") on node \"ci-4459-2-2-n-ad7bb25bb9\" DevicePath \"\"" Jan 23 01:01:28.499288 systemd[1]: var-lib-kubelet-pods-815fb343\x2da824\x2d40f3\x2dbf26\x2d2f785b54ac4d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc6kh9.mount: Deactivated successfully. Jan 23 01:01:28.499855 systemd[1]: var-lib-kubelet-pods-815fb343\x2da824\x2d40f3\x2dbf26\x2d2f785b54ac4d-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 01:01:28.926433 systemd[1]: Removed slice kubepods-besteffort-pod815fb343_a824_40f3_bf26_2f785b54ac4d.slice - libcontainer container kubepods-besteffort-pod815fb343_a824_40f3_bf26_2f785b54ac4d.slice. Jan 23 01:01:29.010867 systemd[1]: Created slice kubepods-besteffort-pod7df44e0e_f60c_4fe5_bce1_8acbf408ec27.slice - libcontainer container kubepods-besteffort-pod7df44e0e_f60c_4fe5_bce1_8acbf408ec27.slice. 
Jan 23 01:01:29.202085 kubelet[2778]: I0123 01:01:29.201925 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94jx\" (UniqueName: \"kubernetes.io/projected/7df44e0e-f60c-4fe5-bce1-8acbf408ec27-kube-api-access-m94jx\") pod \"whisker-7fb57f75c6-jldrr\" (UID: \"7df44e0e-f60c-4fe5-bce1-8acbf408ec27\") " pod="calico-system/whisker-7fb57f75c6-jldrr" Jan 23 01:01:29.202085 kubelet[2778]: I0123 01:01:29.201963 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7df44e0e-f60c-4fe5-bce1-8acbf408ec27-whisker-backend-key-pair\") pod \"whisker-7fb57f75c6-jldrr\" (UID: \"7df44e0e-f60c-4fe5-bce1-8acbf408ec27\") " pod="calico-system/whisker-7fb57f75c6-jldrr" Jan 23 01:01:29.202085 kubelet[2778]: I0123 01:01:29.201975 2778 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7df44e0e-f60c-4fe5-bce1-8acbf408ec27-whisker-ca-bundle\") pod \"whisker-7fb57f75c6-jldrr\" (UID: \"7df44e0e-f60c-4fe5-bce1-8acbf408ec27\") " pod="calico-system/whisker-7fb57f75c6-jldrr" Jan 23 01:01:29.614426 containerd[1610]: time="2026-01-23T01:01:29.614175308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb57f75c6-jldrr,Uid:7df44e0e-f60c-4fe5-bce1-8acbf408ec27,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:29.658940 systemd-networkd[1490]: vxlan.calico: Link UP Jan 23 01:01:29.658955 systemd-networkd[1490]: vxlan.calico: Gained carrier Jan 23 01:01:29.735318 kubelet[2778]: I0123 01:01:29.735124 2778 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="815fb343-a824-40f3-bf26-2f785b54ac4d" path="/var/lib/kubelet/pods/815fb343-a824-40f3-bf26-2f785b54ac4d/volumes" Jan 23 01:01:29.804940 systemd-networkd[1490]: calia3f8805e618: Link UP Jan 23 01:01:29.805942 systemd-networkd[1490]: calia3f8805e618: Gained carrier Jan 23 01:01:29.820650 containerd[1610]: 2026-01-23 01:01:29.733 [INFO][4083] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0 whisker-7fb57f75c6- calico-system 7df44e0e-f60c-4fe5-bce1-8acbf408ec27 891 0 2026-01-23 01:01:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fb57f75c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 whisker-7fb57f75c6-jldrr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia3f8805e618 [] [] }} ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-" Jan 23 01:01:29.820650 containerd[1610]: 2026-01-23 01:01:29.733 [INFO][4083] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.820650 containerd[1610]: 2026-01-23 01:01:29.766 [INFO][4115] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" 
HandleID="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.766 [INFO][4115] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" HandleID="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024faf0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"whisker-7fb57f75c6-jldrr", "timestamp":"2026-01-23 01:01:29.7662786 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.766 [INFO][4115] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.766 [INFO][4115] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.766 [INFO][4115] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.772 [INFO][4115] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.776 [INFO][4115] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.780 [INFO][4115] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.781 [INFO][4115] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.820852 containerd[1610]: 2026-01-23 01:01:29.784 [INFO][4115] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.784 [INFO][4115] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.785 [INFO][4115] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.789 [INFO][4115] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.793 [INFO][4115] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.65/26] block=192.168.28.64/26 handle="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.793 
[INFO][4115] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.65/26] handle="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.793 [INFO][4115] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 01:01:29.821010 containerd[1610]: 2026-01-23 01:01:29.793 [INFO][4115] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.65/26] IPv6=[] ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" HandleID="k8s-pod-network.88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.821112 containerd[1610]: 2026-01-23 01:01:29.797 [INFO][4083] cni-plugin/k8s.go 418: Populated endpoint ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0", GenerateName:"whisker-7fb57f75c6-", Namespace:"calico-system", SelfLink:"", UID:"7df44e0e-f60c-4fe5-bce1-8acbf408ec27", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb57f75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"whisker-7fb57f75c6-jldrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3f8805e618", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:29.821112 containerd[1610]: 2026-01-23 01:01:29.797 [INFO][4083] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.65/32] ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.821164 containerd[1610]: 2026-01-23 01:01:29.797 [INFO][4083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3f8805e618 ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.821164 containerd[1610]: 2026-01-23 01:01:29.807 [INFO][4083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr"
WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.821199 containerd[1610]: 2026-01-23 01:01:29.807 [INFO][4083] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0", GenerateName:"whisker-7fb57f75c6-", Namespace:"calico-system", SelfLink:"", UID:"7df44e0e-f60c-4fe5-bce1-8acbf408ec27", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fb57f75c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea", Pod:"whisker-7fb57f75c6-jldrr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia3f8805e618", MAC:"3e:c3:ef:94:66:8c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:29.821235 containerd[1610]: 2026-01-23 01:01:29.816 [INFO][4083] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" Namespace="calico-system" Pod="whisker-7fb57f75c6-jldrr" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-whisker--7fb57f75c6--jldrr-eth0" Jan 23 01:01:29.858032 containerd[1610]: time="2026-01-23T01:01:29.857972038Z" level=info msg="connecting to shim 88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea" address="unix:///run/containerd/s/2d3b4f7e3c439b7557595c76e0f7ae76f46bbc60ae2787d5c885fb7f8b84e80c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:29.888948 systemd[1]: Started cri-containerd-88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea.scope - libcontainer container 88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea. 
Jan 23 01:01:29.976300 containerd[1610]: time="2026-01-23T01:01:29.976250223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fb57f75c6-jldrr,Uid:7df44e0e-f60c-4fe5-bce1-8acbf408ec27,Namespace:calico-system,Attempt:0,} returns sandbox id \"88d6f765366e72bbdb36ffa817f96cbac475652f4b4331c73136dc30b3d489ea\"" Jan 23 01:01:29.983011 containerd[1610]: time="2026-01-23T01:01:29.982981294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:01:30.412686 containerd[1610]: time="2026-01-23T01:01:30.412608978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:30.414144 containerd[1610]: time="2026-01-23T01:01:30.413996830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:01:30.414144 containerd[1610]: time="2026-01-23T01:01:30.414099370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:01:30.414595 kubelet[2778]: E0123 01:01:30.414249 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:30.414595 kubelet[2778]: E0123 01:01:30.414298 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:30.420062 kubelet[2778]: E0123 01:01:30.419990 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e2be303283a4989a321aace85e4dbd6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:30.423241 containerd[1610]: time="2026-01-23T01:01:30.423150611Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:01:30.857409 containerd[1610]: time="2026-01-23T01:01:30.857345178Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:30.858793 containerd[1610]: time="2026-01-23T01:01:30.858682488Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:01:30.858793 containerd[1610]: time="2026-01-23T01:01:30.858717648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:30.859003 kubelet[2778]: E0123 01:01:30.858952 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:30.859094 kubelet[2778]: E0123 01:01:30.859013 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:30.859348 kubelet[2778]: E0123 01:01:30.859236 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:30.860932 kubelet[2778]: E0123 01:01:30.860858 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:01:30.912184 kubelet[2778]: E0123 01:01:30.912130 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:01:31.223985 systemd-networkd[1490]: calia3f8805e618: Gained IPv6LL Jan 23 01:01:31.416036 systemd-networkd[1490]: vxlan.calico: Gained IPv6LL Jan 23 01:01:31.734447 containerd[1610]: time="2026-01-23T01:01:31.733985335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8rskh,Uid:9345d4db-ab4d-4bde-b8b4-e031305112f9,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:31.734447 containerd[1610]: time="2026-01-23T01:01:31.734108825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-5d4c5,Uid:a59d5fb0-f7af-4d9c-9241-42ed48cacc40,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:01:31.734447 containerd[1610]: time="2026-01-23T01:01:31.734008235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc28p,Uid:5259dbbd-3b68-4b5b-9de4-297f13034dcd,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:31.735313 containerd[1610]: time="2026-01-23T01:01:31.735257625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xc9r4,Uid:3dcd0190-221a-4a7f-9f15-ae8db7886edb,Namespace:kube-system,Attempt:0,}" Jan 23 01:01:31.914313 kubelet[2778]: E0123 01:01:31.913964 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:01:31.944902 systemd-networkd[1490]: calia5d33c1cda6: Link UP Jan 23 01:01:31.946366 systemd-networkd[1490]: calia5d33c1cda6: Gained carrier Jan 23 01:01:31.963250 containerd[1610]: 2026-01-23 01:01:31.847 [INFO][4239] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0 goldmane-666569f655- calico-system 9345d4db-ab4d-4bde-b8b4-e031305112f9 826 0 2026-01-23 01:01:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 goldmane-666569f655-8rskh eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia5d33c1cda6 [] [] }} ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-" Jan 23 01:01:31.963250 containerd[1610]: 2026-01-23 01:01:31.847 [INFO][4239] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.963250 containerd[1610]: 2026-01-23 01:01:31.898 [INFO][4299] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" HandleID="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.898 [INFO][4299] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" HandleID="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"goldmane-666569f655-8rskh", "timestamp":"2026-01-23 01:01:31.898300661 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.898 [INFO][4299] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.898 [INFO][4299] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.898 [INFO][4299] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.906 [INFO][4299] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.910 [INFO][4299] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.914 [INFO][4299] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.917 [INFO][4299] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964001 containerd[1610]: 2026-01-23 01:01:31.919 [INFO][4299] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.919 [INFO][4299] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.924 [INFO][4299] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588 Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.929 [INFO][4299] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.935 [INFO][4299] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.66/26] block=192.168.28.64/26 handle="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.935 [INFO][4299] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.66/26] handle="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.935 [INFO][4299] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:01:31.964370 containerd[1610]: 2026-01-23 01:01:31.935 [INFO][4299] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.66/26] IPv6=[] ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" HandleID="k8s-pod-network.5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.964493 containerd[1610]: 2026-01-23 01:01:31.939 [INFO][4239] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9345d4db-ab4d-4bde-b8b4-e031305112f9", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"goldmane-666569f655-8rskh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia5d33c1cda6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:31.964536 containerd[1610]: 2026-01-23 01:01:31.939 [INFO][4239] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.66/32] ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.964536 containerd[1610]: 2026-01-23 01:01:31.939 [INFO][4239] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5d33c1cda6 ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.964536 containerd[1610]: 2026-01-23 01:01:31.945 [INFO][4239] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.964584 containerd[1610]: 2026-01-23 01:01:31.947 [INFO][4239] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588"
Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"9345d4db-ab4d-4bde-b8b4-e031305112f9", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588", Pod:"goldmane-666569f655-8rskh", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia5d33c1cda6", MAC:"9e:52:2f:a4:b9:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:31.964622 containerd[1610]: 2026-01-23 01:01:31.960 [INFO][4239] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" Namespace="calico-system" Pod="goldmane-666569f655-8rskh" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-goldmane--666569f655--8rskh-eth0" Jan 23 01:01:31.986094 containerd[1610]: time="2026-01-23T01:01:31.985629528Z" level=info msg="connecting to shim 5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588" address="unix:///run/containerd/s/b2594a9d9929501671afed055b81bc40c5008842f3aacd143745191c14716b6e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:32.006999 systemd[1]: Started cri-containerd-5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588.scope - libcontainer container 5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588. 
Jan 23 01:01:32.043665 systemd-networkd[1490]: cali7a80e3ff626: Link UP Jan 23 01:01:32.045712 systemd-networkd[1490]: cali7a80e3ff626: Gained carrier Jan 23 01:01:32.059883 containerd[1610]: 2026-01-23 01:01:31.855 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0 csi-node-driver- calico-system 5259dbbd-3b68-4b5b-9de4-297f13034dcd 705 0 2026-01-23 01:01:09 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 csi-node-driver-wc28p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7a80e3ff626 [] [] }} ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-" Jan 23 01:01:32.059883 containerd[1610]: 2026-01-23 01:01:31.855 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.059883 containerd[1610]: 2026-01-23 01:01:31.897 [INFO][4304] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" HandleID="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:31.901 [INFO][4304] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" HandleID="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5870), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"csi-node-driver-wc28p", "timestamp":"2026-01-23 01:01:31.89733088 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:31.901 [INFO][4304] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:31.935 [INFO][4304] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:31.936 [INFO][4304] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:32.008 [INFO][4304] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:32.013 [INFO][4304] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:32.018 [INFO][4304] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:32.019 [INFO][4304] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060082 containerd[1610]: 2026-01-23 01:01:32.021 [INFO][4304] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.021 [INFO][4304] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.023 [INFO][4304] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.026 [INFO][4304] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.030 [INFO][4304] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.67/26] block=192.168.28.64/26 handle="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.030 [INFO][4304] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.67/26] handle="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.031 [INFO][4304] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:01:32.060233 containerd[1610]: 2026-01-23 01:01:32.031 [INFO][4304] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.67/26] IPv6=[] ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" HandleID="k8s-pod-network.f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.060375 containerd[1610]: 2026-01-23 01:01:32.033 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5259dbbd-3b68-4b5b-9de4-297f13034dcd", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"csi-node-driver-wc28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a80e3ff626", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.060420 containerd[1610]: 2026-01-23 01:01:32.033 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.67/32] ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.060420 containerd[1610]: 2026-01-23 01:01:32.033 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7a80e3ff626 ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.060420 containerd[1610]: 2026-01-23 01:01:32.047 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.060478 containerd[1610]: 2026-01-23 01:01:32.048 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5259dbbd-3b68-4b5b-9de4-297f13034dcd", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff", Pod:"csi-node-driver-wc28p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7a80e3ff626", MAC:"4e:aa:ff:7d:02:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.060514 containerd[1610]: 2026-01-23 01:01:32.056 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" Namespace="calico-system" Pod="csi-node-driver-wc28p" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-csi--node--driver--wc28p-eth0" Jan 23 01:01:32.083552 containerd[1610]: time="2026-01-23T01:01:32.083513634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-8rskh,Uid:9345d4db-ab4d-4bde-b8b4-e031305112f9,Namespace:calico-system,Attempt:0,} returns sandbox id \"5c13a53acd8bb2025975c6aba65ff611b4dd02f8b058afa304068b8d5ef1c588\"" Jan 23 01:01:32.085722 containerd[1610]: time="2026-01-23T01:01:32.085703930Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:01:32.089537 containerd[1610]: time="2026-01-23T01:01:32.089509473Z" level=info msg="connecting to shim f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff" address="unix:///run/containerd/s/c4ce976f4b99927eaba0dcc174aabfa37ab086e8aeadb31bc6d00e81132eac34" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:32.112908 systemd[1]: Started cri-containerd-f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff.scope - libcontainer container f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff. 
Jan 23 01:01:32.149850 systemd-networkd[1490]: calia2d9da68f36: Link UP Jan 23 01:01:32.151084 systemd-networkd[1490]: calia2d9da68f36: Gained carrier Jan 23 01:01:32.157931 containerd[1610]: time="2026-01-23T01:01:32.157896688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc28p,Uid:5259dbbd-3b68-4b5b-9de4-297f13034dcd,Namespace:calico-system,Attempt:0,} returns sandbox id \"f9b1e74aa51b810b472ce6d5410e7ba3e3f62dbcdfc80ecc188f464115fafbff\"" Jan 23 01:01:32.168785 containerd[1610]: 2026-01-23 01:01:31.833 [INFO][4241] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0 calico-apiserver-6464b5f569- calico-apiserver a59d5fb0-f7af-4d9c-9241-42ed48cacc40 824 0 2026-01-23 01:01:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6464b5f569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 calico-apiserver-6464b5f569-5d4c5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia2d9da68f36 [] [] }} ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-" Jan 23 01:01:32.168785 containerd[1610]: 2026-01-23 01:01:31.833 [INFO][4241] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.168785 containerd[1610]: 2026-01-23 01:01:31.902 [INFO][4292] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" HandleID="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:31.902 [INFO][4292] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" HandleID="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"calico-apiserver-6464b5f569-5d4c5", "timestamp":"2026-01-23 01:01:31.90215657 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:31.902 [INFO][4292] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.031 [INFO][4292] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.031 [INFO][4292] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.108 [INFO][4292] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.114 [INFO][4292] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.119 [INFO][4292] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.121 [INFO][4292] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169040 containerd[1610]: 2026-01-23 01:01:32.122 [INFO][4292] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.123 [INFO][4292] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.124 [INFO][4292] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.128 [INFO][4292] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.135 [INFO][4292] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.68/26] block=192.168.28.64/26 handle="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.135 [INFO][4292] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.68/26] handle="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.135 [INFO][4292] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
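Worth noting across these entries: the host-wide IPAM lock fully serializes the concurrent CNI ADDs. [4304] releases the lock at 01:01:32.031 and [4292] acquires it in the same instant; [4297] queues behind [4292] the same way a few entries below. A toy reproduction of that contention pattern:

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        var ipamLock sync.Mutex // stands in for the host-wide IPAM lock
        var wg sync.WaitGroup
        for _, id := range []string{"4304", "4292", "4297"} {
            wg.Add(1)
            go func(id string) {
                defer wg.Done()
                ipamLock.Lock()
                fmt.Printf("[%s] acquired lock at %s\n", id, time.Now().Format("15:04:05.000"))
                time.Sleep(100 * time.Millisecond) // block load + claim + write
                fmt.Printf("[%s] released lock at %s\n", id, time.Now().Format("15:04:05.000"))
                ipamLock.Unlock()
            }(id)
        }
        wg.Wait()
    }

Each goroutine's "acquired" line lands on the previous goroutine's "released" timestamp, the same lockstep the containerd entries show.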
Jan 23 01:01:32.169228 containerd[1610]: 2026-01-23 01:01:32.135 [INFO][4292] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.68/26] IPv6=[] ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" HandleID="k8s-pod-network.23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.169333 containerd[1610]: 2026-01-23 01:01:32.142 [INFO][4241] cni-plugin/k8s.go 418: Populated endpoint ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0", GenerateName:"calico-apiserver-6464b5f569-", Namespace:"calico-apiserver", SelfLink:"", UID:"a59d5fb0-f7af-4d9c-9241-42ed48cacc40", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6464b5f569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"calico-apiserver-6464b5f569-5d4c5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2d9da68f36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.169375 containerd[1610]: 2026-01-23 01:01:32.142 [INFO][4241] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.68/32] ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.169375 containerd[1610]: 2026-01-23 01:01:32.142 [INFO][4241] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2d9da68f36 ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.169375 containerd[1610]: 2026-01-23 01:01:32.150 [INFO][4241] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.169422 containerd[1610]: 2026-01-23 01:01:32.152 
[INFO][4241] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0", GenerateName:"calico-apiserver-6464b5f569-", Namespace:"calico-apiserver", SelfLink:"", UID:"a59d5fb0-f7af-4d9c-9241-42ed48cacc40", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6464b5f569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb", Pod:"calico-apiserver-6464b5f569-5d4c5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia2d9da68f36", MAC:"aa:6a:98:f7:78:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.169462 containerd[1610]: 2026-01-23 01:01:32.165 [INFO][4241] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-5d4c5" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--5d4c5-eth0" Jan 23 01:01:32.186825 containerd[1610]: time="2026-01-23T01:01:32.185991274Z" level=info msg="connecting to shim 23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb" address="unix:///run/containerd/s/d603391fcbe629913f70928b89bb7611de4357c503fe6ac057c9ce332d0f8bf8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:32.203914 systemd[1]: Started cri-containerd-23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb.scope - libcontainer container 23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb. 
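Once the dataplane finishes wiring the veth, the endpoint is rewritten with the pieces only known at that point: the container ID and the interface MAC (aa:6a:98:f7:78:e3 here, 4e:aa:ff:7d:02:4c for the csi-node-driver endpoint above). Both MACs have the locally administered bit set and the multicast bit clear, the shape of a randomly generated unicast address. Below is one common way to produce such an address; whether Calico derives its MACs exactly this way is not something this log shows:

    package main

    import (
        "crypto/rand"
        "fmt"
        "net"
    )

    // randomLocalMAC returns a random locally administered, unicast MAC,
    // the same shape as aa:6a:98:f7:78:e3 in the endpoint above.
    func randomLocalMAC() (net.HardwareAddr, error) {
        mac := make(net.HardwareAddr, 6)
        if _, err := rand.Read(mac); err != nil {
            return nil, err
        }
        mac[0] |= 0x02  // set the locally administered bit
        mac[0] &^= 0x01 // clear the multicast bit (unicast)
        return mac, nil
    }

    func main() {
        mac, err := randomLocalMAC()
        if err != nil {
            panic(err)
        }
        fmt.Println(mac) // e.g. 6a:0e:51:92:7c:d3
    }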
Jan 23 01:01:32.253760 systemd-networkd[1490]: caliedf4f32b4fc: Link UP Jan 23 01:01:32.256690 systemd-networkd[1490]: caliedf4f32b4fc: Gained carrier Jan 23 01:01:32.282064 containerd[1610]: time="2026-01-23T01:01:32.280288298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-5d4c5,Uid:a59d5fb0-f7af-4d9c-9241-42ed48cacc40,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"23e4954dee9252aa9483b85565d8ca5ce2d64237db178da09505b559b95035fb\"" Jan 23 01:01:32.285354 containerd[1610]: 2026-01-23 01:01:31.843 [INFO][4258] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0 coredns-674b8bbfcf- kube-system 3dcd0190-221a-4a7f-9f15-ae8db7886edb 815 0 2026-01-23 01:00:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 coredns-674b8bbfcf-xc9r4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliedf4f32b4fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-" Jan 23 01:01:32.285354 containerd[1610]: 2026-01-23 01:01:31.844 [INFO][4258] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285354 containerd[1610]: 2026-01-23 01:01:31.904 [INFO][4297] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" HandleID="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:31.904 [INFO][4297] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" HandleID="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037db90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"coredns-674b8bbfcf-xc9r4", "timestamp":"2026-01-23 01:01:31.90421428 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:31.904 [INFO][4297] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.136 [INFO][4297] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.136 [INFO][4297] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.208 [INFO][4297] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.214 [INFO][4297] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.219 [INFO][4297] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.221 [INFO][4297] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285516 containerd[1610]: 2026-01-23 01:01:32.223 [INFO][4297] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.223 [INFO][4297] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.225 [INFO][4297] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0 Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.239 [INFO][4297] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.244 [INFO][4297] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.69/26] block=192.168.28.64/26 handle="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.244 [INFO][4297] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.69/26] handle="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.245 [INFO][4297] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 01:01:32.285667 containerd[1610]: 2026-01-23 01:01:32.245 [INFO][4297] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.69/26] IPv6=[] ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" HandleID="k8s-pod-network.34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.248 [INFO][4258] cni-plugin/k8s.go 418: Populated endpoint ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3dcd0190-221a-4a7f-9f15-ae8db7886edb", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 0, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"coredns-674b8bbfcf-xc9r4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedf4f32b4fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.248 [INFO][4258] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.69/32] ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.248 [INFO][4258] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliedf4f32b4fc ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.260 [INFO][4258] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.261 [INFO][4258] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"3dcd0190-221a-4a7f-9f15-ae8db7886edb", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 0, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0", Pod:"coredns-674b8bbfcf-xc9r4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliedf4f32b4fc", MAC:"26:21:51:8e:d8:af", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.285803 containerd[1610]: 2026-01-23 01:01:32.277 [INFO][4258] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" Namespace="kube-system" Pod="coredns-674b8bbfcf-xc9r4" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--xc9r4-eth0" Jan 23 01:01:32.304190 containerd[1610]: time="2026-01-23T01:01:32.304126921Z" level=info msg="connecting to shim 34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0" address="unix:///run/containerd/s/f1e3fd314f2925ab713d894bb83af3f37a19d985c03a6eadcf7f203266b1b547" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:32.325927 systemd[1]: Started cri-containerd-34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0.scope - libcontainer container 34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0. 
Jan 23 01:01:32.370400 containerd[1610]: time="2026-01-23T01:01:32.370339321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-xc9r4,Uid:3dcd0190-221a-4a7f-9f15-ae8db7886edb,Namespace:kube-system,Attempt:0,} returns sandbox id \"34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0\"" Jan 23 01:01:32.375453 containerd[1610]: time="2026-01-23T01:01:32.375001561Z" level=info msg="CreateContainer within sandbox \"34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:01:32.383708 containerd[1610]: time="2026-01-23T01:01:32.383671604Z" level=info msg="Container 6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:01:32.387189 containerd[1610]: time="2026-01-23T01:01:32.387130977Z" level=info msg="CreateContainer within sandbox \"34da832252903c652aa9cce377fc7334cdd0154f4f09e2e3e3b41fa9fc7597b0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef\"" Jan 23 01:01:32.388351 containerd[1610]: time="2026-01-23T01:01:32.387629777Z" level=info msg="StartContainer for \"6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef\"" Jan 23 01:01:32.388351 containerd[1610]: time="2026-01-23T01:01:32.388196345Z" level=info msg="connecting to shim 6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef" address="unix:///run/containerd/s/f1e3fd314f2925ab713d894bb83af3f37a19d985c03a6eadcf7f203266b1b547" protocol=ttrpc version=3 Jan 23 01:01:32.404870 systemd[1]: Started cri-containerd-6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef.scope - libcontainer container 6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef. 
Jan 23 01:01:32.429838 containerd[1610]: time="2026-01-23T01:01:32.429796584Z" level=info msg="StartContainer for \"6a87d2370ce822b124b7221fe5b759636d3a56e218f3a72d63dc06df982ac0ef\" returns successfully" Jan 23 01:01:32.535555 containerd[1610]: time="2026-01-23T01:01:32.535495916Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:32.536905 containerd[1610]: time="2026-01-23T01:01:32.536841223Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:01:32.537147 containerd[1610]: time="2026-01-23T01:01:32.537020843Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:32.537731 kubelet[2778]: E0123 01:01:32.537648 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:32.537985 kubelet[2778]: E0123 01:01:32.537934 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:32.538605 containerd[1610]: time="2026-01-23T01:01:32.538531450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:01:32.538723 kubelet[2778]: E0123 01:01:32.538556 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcfrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:32.540535 kubelet[2778]: E0123 01:01:32.540493 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:01:32.733544 containerd[1610]: time="2026-01-23T01:01:32.733423117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-l7jwf,Uid:9a36a138-e7e1-4031-9559-4a28820b727d,Namespace:calico-apiserver,Attempt:0,}" Jan 23 01:01:32.875060 systemd-networkd[1490]: cali916ee47caa0: Link UP Jan 23 01:01:32.876406 systemd-networkd[1490]: cali916ee47caa0: Gained carrier Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.790 [INFO][4574] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0 calico-apiserver-6464b5f569- calico-apiserver 9a36a138-e7e1-4031-9559-4a28820b727d 820 0 2026-01-23 01:01:03 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6464b5f569 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 calico-apiserver-6464b5f569-l7jwf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali916ee47caa0 [] [] }} ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.790 [INFO][4574] cni-plugin/k8s.go 74: Extracted 
identifiers for CmdAddK8s ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.824 [INFO][4587] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" HandleID="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.824 [INFO][4587] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" HandleID="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb900), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"calico-apiserver-6464b5f569-l7jwf", "timestamp":"2026-01-23 01:01:32.824303518 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.824 [INFO][4587] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.825 [INFO][4587] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.825 [INFO][4587] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.835 [INFO][4587] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.843 [INFO][4587] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.848 [INFO][4587] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.850 [INFO][4587] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.853 [INFO][4587] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.853 [INFO][4587] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.854 [INFO][4587] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1 Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.860 [INFO][4587] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.866 [INFO][4587] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.70/26] block=192.168.28.64/26 handle="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.866 [INFO][4587] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.70/26] handle="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.866 [INFO][4587] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
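The pull failures threaded through these entries follow the kubelet's standard escalation: the first attempt surfaces as ErrImagePull (goldmane above, at 01:01:32.537), and subsequent sync attempts surface as ImagePullBackOff with a doubling delay (the goldmane back-off appears a few entries below). A sketch of that doubling-with-cap pattern; the 10-second initial delay and 5-minute cap are the commonly cited kubelet defaults and should be treated as an assumption, not something this log states:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Doubling backoff with a cap, the pattern behind ImagePullBackOff.
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            fmt.Printf("attempt %d: back off %s before retrying the pull\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

Since every ghcr.io/flatcar/calico/* tag here resolves to 404 Not Found, no amount of backoff will succeed; the affected pods stay in ImagePullBackOff until the image reference or the registry content changes.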
Jan 23 01:01:32.899081 containerd[1610]: 2026-01-23 01:01:32.866 [INFO][4587] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.70/26] IPv6=[] ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" HandleID="k8s-pod-network.58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.870 [INFO][4574] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0", GenerateName:"calico-apiserver-6464b5f569-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a36a138-e7e1-4031-9559-4a28820b727d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6464b5f569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"calico-apiserver-6464b5f569-l7jwf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali916ee47caa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.870 [INFO][4574] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.70/32] ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.870 [INFO][4574] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali916ee47caa0 ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.879 [INFO][4574] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.880 
[INFO][4574] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0", GenerateName:"calico-apiserver-6464b5f569-", Namespace:"calico-apiserver", SelfLink:"", UID:"9a36a138-e7e1-4031-9559-4a28820b727d", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6464b5f569", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1", Pod:"calico-apiserver-6464b5f569-l7jwf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali916ee47caa0", MAC:"2a:be:ec:6e:06:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:32.900981 containerd[1610]: 2026-01-23 01:01:32.894 [INFO][4574] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" Namespace="calico-apiserver" Pod="calico-apiserver-6464b5f569-l7jwf" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--apiserver--6464b5f569--l7jwf-eth0" Jan 23 01:01:32.932339 containerd[1610]: time="2026-01-23T01:01:32.931623657Z" level=info msg="connecting to shim 58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1" address="unix:///run/containerd/s/c40351cbd3d6549059702d10cf5626aa670e07e7ec82152ffb9244e30799de98" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:32.941911 kubelet[2778]: E0123 01:01:32.936145 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:01:32.970943 containerd[1610]: time="2026-01-23T01:01:32.970846251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:32.972043 containerd[1610]: time="2026-01-23T01:01:32.972007708Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:01:32.972264 containerd[1610]: time="2026-01-23T01:01:32.972249908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:01:32.972650 kubelet[2778]: E0123 01:01:32.972605 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:32.972701 kubelet[2778]: E0123 01:01:32.972655 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:32.973442 containerd[1610]: time="2026-01-23T01:01:32.973400335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:32.973881 systemd[1]: Started cri-containerd-58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1.scope - libcontainer container 58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1. Jan 23 01:01:32.976733 kubelet[2778]: E0123 01:01:32.976661 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:32.977585 kubelet[2778]: I0123 01:01:32.977337 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-xc9r4" podStartSLOduration=38.977324757 podStartE2EDuration="38.977324757s" podCreationTimestamp="2026-01-23 01:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:01:32.976957448 +0000 UTC m=+45.393590874" watchObservedRunningTime="2026-01-23 01:01:32.977324757 +0000 UTC m=+45.393958133" Jan 23 01:01:33.032309 containerd[1610]: time="2026-01-23T01:01:33.032241542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6464b5f569-l7jwf,Uid:9a36a138-e7e1-4031-9559-4a28820b727d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58a7e52e40e37c17cd30fbd3c25917f29a6d055d75db3d676160fb34858ee5e1\"" Jan 23 01:01:33.402606 containerd[1610]: time="2026-01-23T01:01:33.402563952Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:33.404075 containerd[1610]: time="2026-01-23T01:01:33.404021639Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:33.404638 containerd[1610]: time="2026-01-23T01:01:33.404098679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:33.404696 kubelet[2778]: E0123 01:01:33.404337 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:33.404696 kubelet[2778]: E0123 01:01:33.404396 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:33.404696 kubelet[2778]: E0123 01:01:33.404597 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq6hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:33.405997 containerd[1610]: time="2026-01-23T01:01:33.405287947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:01:33.406094 kubelet[2778]: E0123 01:01:33.406011 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:01:33.528195 systemd-networkd[1490]: cali7a80e3ff626: Gained IPv6LL Jan 23 01:01:33.656390 systemd-networkd[1490]: calia2d9da68f36: Gained IPv6LL Jan 23 01:01:33.720083 systemd-networkd[1490]: calia5d33c1cda6: Gained IPv6LL Jan 23 01:01:33.721166 systemd-networkd[1490]: caliedf4f32b4fc: Gained IPv6LL Jan 23 01:01:33.733571 containerd[1610]: time="2026-01-23T01:01:33.733523327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz8md,Uid:7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8,Namespace:kube-system,Attempt:0,}" Jan 23 01:01:33.820905 containerd[1610]: 
time="2026-01-23T01:01:33.820794963Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:33.822882 containerd[1610]: time="2026-01-23T01:01:33.822847828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:01:33.823249 containerd[1610]: time="2026-01-23T01:01:33.822956028Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:01:33.823807 kubelet[2778]: E0123 01:01:33.823564 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:33.824095 kubelet[2778]: E0123 01:01:33.824053 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:33.824352 kubelet[2778]: E0123 01:01:33.824315 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:33.825035 containerd[1610]: time="2026-01-23T01:01:33.824999154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:33.826388 kubelet[2778]: E0123 01:01:33.826006 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:33.836274 systemd-networkd[1490]: cali2cbd7b74cda: Link UP Jan 23 01:01:33.836464 systemd-networkd[1490]: cali2cbd7b74cda: Gained carrier Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.772 [INFO][4652] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0 
coredns-674b8bbfcf- kube-system 7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8 825 0 2026-01-23 01:00:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 coredns-674b8bbfcf-dz8md eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2cbd7b74cda [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.772 [INFO][4652] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.800 [INFO][4664] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" HandleID="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.801 [INFO][4664] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" HandleID="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"coredns-674b8bbfcf-dz8md", "timestamp":"2026-01-23 01:01:33.80079952 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.801 [INFO][4664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.801 [INFO][4664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
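
[annotation] The steps that follow the lock acquisition above (affinity lookup, block load, assignment) amount to: scan the host's affine block 192.168.28.64/26 and hand out its next free address, which the entries below claim as 192.168.28.71. A minimal sketch of that block-scan idea, in Go; the in-memory map and the pretend pre-claimed addresses are illustrative assumptions, not Calico's datastore-backed IPAM.

package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans a CIDR block in address order and returns the first
// address not yet allocated. This mirrors the shape of the "Attempting
// to assign 1 addresses from block" step in the log, but is a toy:
// real Calico IPAM persists allocations and applies its own
// reservation rules.
func nextFree(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.28.64/26")
	// Pretend .64-.70 were claimed by the earlier pods on this node,
	// so .71 is the next candidate, matching the entries below.
	alloc := map[netip.Addr]bool{}
	for a, i := block.Addr(), 0; i < 7; a, i = a.Next(), i+1 {
		alloc[a] = true
	}
	if ip, ok := nextFree(block, alloc); ok {
		fmt.Println("assigned", ip) // assigned 192.168.28.71
	}
}
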
Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.801 [INFO][4664] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.806 [INFO][4664] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.810 [INFO][4664] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.813 [INFO][4664] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.815 [INFO][4664] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.817 [INFO][4664] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.817 [INFO][4664] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.818 [INFO][4664] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448 Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.823 [INFO][4664] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.830 [INFO][4664] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.71/26] block=192.168.28.64/26 handle="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.830 [INFO][4664] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.71/26] handle="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.831 [INFO][4664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
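
[annotation] In the WorkloadEndpoint dumps that follow, ports print as Go hex literals rather than decimal: Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port). A one-line check:

package main

import "fmt"

func main() {
	// Hex port fields from the endpoint dumps, decoded to decimal.
	fmt.Println(0x35, 0x23c1) // 53 9153
}
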
Jan 23 01:01:33.852012 containerd[1610]: 2026-01-23 01:01:33.831 [INFO][4664] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.71/26] IPv6=[] ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" HandleID="k8s-pod-network.ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.833 [INFO][4652] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 0, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"coredns-674b8bbfcf-dz8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cbd7b74cda", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.833 [INFO][4652] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.71/32] ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.833 [INFO][4652] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cbd7b74cda ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.836 [INFO][4652] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.837 [INFO][4652] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 0, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448", Pod:"coredns-674b8bbfcf-dz8md", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2cbd7b74cda", MAC:"a6:c7:d3:f6:dd:0e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:33.853720 containerd[1610]: 2026-01-23 01:01:33.849 [INFO][4652] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" Namespace="kube-system" Pod="coredns-674b8bbfcf-dz8md" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-coredns--674b8bbfcf--dz8md-eth0" Jan 23 01:01:33.880075 containerd[1610]: time="2026-01-23T01:01:33.880027891Z" level=info msg="connecting to shim ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448" address="unix:///run/containerd/s/7a01026636c49d01a767dfdf87a3176dc8905a3b9f3d2fbadfd51671ba37394c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:33.908901 systemd[1]: Started cri-containerd-ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448.scope - libcontainer container ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448. 
Jan 23 01:01:33.958557 kubelet[2778]: E0123 01:01:33.958525 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:01:33.959720 kubelet[2778]: E0123 01:01:33.959388 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:33.959720 kubelet[2778]: E0123 01:01:33.958525 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:01:33.966061 containerd[1610]: time="2026-01-23T01:01:33.966024268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dz8md,Uid:7dee7e1d-f87d-44e7-ae0a-0511acc0e3f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448\"" Jan 23 01:01:33.971119 containerd[1610]: time="2026-01-23T01:01:33.971086199Z" level=info msg="CreateContainer within sandbox \"ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 01:01:33.984722 containerd[1610]: time="2026-01-23T01:01:33.984665423Z" level=info msg="Container 7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247: CDI devices from CRI Config.CDIDevices: []" Jan 23 01:01:33.995989 containerd[1610]: time="2026-01-23T01:01:33.995951562Z" level=info msg="CreateContainer within sandbox \"ee6152d358ce846a87f1be2d1fa27b83eb977551d1ef26d6bc5e2b8786774448\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247\"" Jan 23 01:01:33.997081 containerd[1610]: time="2026-01-23T01:01:33.996351351Z" level=info 
msg="StartContainer for \"7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247\"" Jan 23 01:01:33.997081 containerd[1610]: time="2026-01-23T01:01:33.996975739Z" level=info msg="connecting to shim 7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247" address="unix:///run/containerd/s/7a01026636c49d01a767dfdf87a3176dc8905a3b9f3d2fbadfd51671ba37394c" protocol=ttrpc version=3 Jan 23 01:01:34.021911 systemd[1]: Started cri-containerd-7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247.scope - libcontainer container 7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247. Jan 23 01:01:34.050229 containerd[1610]: time="2026-01-23T01:01:34.050193783Z" level=info msg="StartContainer for \"7215e5a38c8530ae0dff1703991e1a2f6d67320b970b0110a6ceec2ee7078247\" returns successfully" Jan 23 01:01:34.232223 systemd-networkd[1490]: cali916ee47caa0: Gained IPv6LL Jan 23 01:01:34.259735 containerd[1610]: time="2026-01-23T01:01:34.259661992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:34.261891 containerd[1610]: time="2026-01-23T01:01:34.261781169Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:34.262091 containerd[1610]: time="2026-01-23T01:01:34.261936808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:34.262271 kubelet[2778]: E0123 01:01:34.262151 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:34.262271 kubelet[2778]: E0123 01:01:34.262225 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:34.262622 kubelet[2778]: E0123 01:01:34.262436 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7nq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:34.264180 kubelet[2778]: E0123 01:01:34.264086 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:01:34.732536 containerd[1610]: time="2026-01-23T01:01:34.732486055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56555c56bc-bq2nx,Uid:5d7dff36-d3fb-4b97-8324-ac6e2554491c,Namespace:calico-system,Attempt:0,}" Jan 23 01:01:34.830184 systemd-networkd[1490]: calia89c9c80b82: Link UP Jan 23 01:01:34.832099 systemd-networkd[1490]: calia89c9c80b82: Gained carrier Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.764 [INFO][4759] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0 calico-kube-controllers-56555c56bc- calico-system 5d7dff36-d3fb-4b97-8324-ac6e2554491c 822 0 2026-01-23 
01:01:09 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:56555c56bc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-2-n-ad7bb25bb9 calico-kube-controllers-56555c56bc-bq2nx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia89c9c80b82 [] [] }} ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.764 [INFO][4759] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.792 [INFO][4770] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" HandleID="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.792 [INFO][4770] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" HandleID="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-2-n-ad7bb25bb9", "pod":"calico-kube-controllers-56555c56bc-bq2nx", "timestamp":"2026-01-23 01:01:34.792696236 +0000 UTC"}, Hostname:"ci-4459-2-2-n-ad7bb25bb9", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.793 [INFO][4770] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.793 [INFO][4770] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
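
[annotation] As in the first IPAM run, the claim below only sticks once the mutated block is persisted (the "Writing block in order to claim IPs" step), with the host-wide lock serializing concurrent CNI invocations on this node. Calico's datastore writes are generally understood to be revision-checked, so a useful mental model is an optimistic read-modify-write loop. The store interface and revision field below are invented for illustration, not Calico code:

package main

import (
	"fmt"
	"sync"
)

// block is a toy stand-in for an IPAM block: a revision plus the set
// of claimed offsets within the /26.
type block struct {
	rev     int
	claimed map[int]bool
}

// store is an invented in-memory datastore with a compare-and-swap
// write; a real datastore would reject stale revisions the same way.
type store struct {
	mu sync.Mutex
	b  block
}

func (s *store) read() block {
	s.mu.Lock()
	defer s.mu.Unlock()
	c := block{rev: s.b.rev, claimed: map[int]bool{}}
	for k := range s.b.claimed {
		c.claimed[k] = true
	}
	return c
}

func (s *store) casWrite(b block) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if b.rev != s.b.rev { // another writer got there first: retry
		return false
	}
	b.rev++
	s.b = b
	return true
}

// claimNext retries read-modify-write until the block update lands,
// sketching the "Writing block in order to claim IPs" step.
func claimNext(s *store) int {
	for {
		b := s.read()
		idx := 0
		for b.claimed[idx] {
			idx++
		}
		b.claimed[idx] = true
		if s.casWrite(b) {
			return idx
		}
	}
}

func main() {
	s := &store{b: block{claimed: map[int]bool{}}}
	for i := 0; i < 3; i++ {
		fmt.Println("claimed offset", claimNext(s))
	}
}
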
Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.793 [INFO][4770] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-2-n-ad7bb25bb9' Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.800 [INFO][4770] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.804 [INFO][4770] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.808 [INFO][4770] ipam/ipam.go 511: Trying affinity for 192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.810 [INFO][4770] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.813 [INFO][4770] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.64/26 host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.813 [INFO][4770] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.64/26 handle="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.814 [INFO][4770] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54 Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.818 [INFO][4770] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.64/26 handle="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.824 [INFO][4770] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.72/26] block=192.168.28.64/26 handle="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.824 [INFO][4770] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.72/26] handle="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" host="ci-4459-2-2-n-ad7bb25bb9" Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.824 [INFO][4770] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
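
[annotation] While the CNI work above keeps succeeding, every image pull fails the same way: containerd resolves the reference against ghcr.io and the registry answers 404 for the manifest, which surfaces as NotFound, then ErrImagePull. The check can be reproduced off-node. This is a sketch: the anonymous token endpoint and scope format follow the Docker Registry v2 token flow that ghcr.io is generally understood to implement for public images, and are assumptions here rather than anything taken from this log.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// probe fetches an anonymous pull token (assumed ghcr.io token flow),
// then HEADs the manifest for the tag; a 404 status reproduces the
// "failed to resolve reference" errors in the log.
func probe(repo, tag string) error {
	tokURL := fmt.Sprintf("https://ghcr.io/token?service=ghcr.io&scope=repository:%s:pull", repo)
	resp, err := http.Get(tokURL)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var t struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&t); err != nil {
		return err
	}

	req, err := http.NewRequest(http.MethodHead,
		fmt.Sprintf("https://ghcr.io/v2/%s/manifests/%s", repo, tag), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+t.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status)
	return nil
}

func main() {
	// Two of the references the kubelet is failing on above.
	for _, repo := range []string{"flatcar/calico/csi", "flatcar/calico/apiserver"} {
		if err := probe(repo, "v3.30.4"); err != nil {
			fmt.Println(err)
		}
	}
}
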
Jan 23 01:01:34.844522 containerd[1610]: 2026-01-23 01:01:34.824 [INFO][4770] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.72/26] IPv6=[] ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" HandleID="k8s-pod-network.43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Workload="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.827 [INFO][4759] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0", GenerateName:"calico-kube-controllers-56555c56bc-", Namespace:"calico-system", SelfLink:"", UID:"5d7dff36-d3fb-4b97-8324-ac6e2554491c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56555c56bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"", Pod:"calico-kube-controllers-56555c56bc-bq2nx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia89c9c80b82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.828 [INFO][4759] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.72/32] ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.828 [INFO][4759] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia89c9c80b82 ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.830 [INFO][4759] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" 
WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.831 [INFO][4759] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0", GenerateName:"calico-kube-controllers-56555c56bc-", Namespace:"calico-system", SelfLink:"", UID:"5d7dff36-d3fb-4b97-8324-ac6e2554491c", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 1, 1, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"56555c56bc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-2-n-ad7bb25bb9", ContainerID:"43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54", Pod:"calico-kube-controllers-56555c56bc-bq2nx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia89c9c80b82", MAC:"1e:e3:3f:21:1c:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 01:01:34.845468 containerd[1610]: 2026-01-23 01:01:34.840 [INFO][4759] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" Namespace="calico-system" Pod="calico-kube-controllers-56555c56bc-bq2nx" WorkloadEndpoint="ci--4459--2--2--n--ad7bb25bb9-k8s-calico--kube--controllers--56555c56bc--bq2nx-eth0" Jan 23 01:01:34.872585 containerd[1610]: time="2026-01-23T01:01:34.872544402Z" level=info msg="connecting to shim 43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54" address="unix:///run/containerd/s/4fe12954f9aff51b81f5af72c7272ba820883f7e951178e6377d76a07eb99b70" namespace=k8s.io protocol=ttrpc version=3 Jan 23 01:01:34.900915 systemd[1]: Started cri-containerd-43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54.scope - libcontainer container 43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54. 
Jan 23 01:01:34.946149 containerd[1610]: time="2026-01-23T01:01:34.946108798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-56555c56bc-bq2nx,Uid:5d7dff36-d3fb-4b97-8324-ac6e2554491c,Namespace:calico-system,Attempt:0,} returns sandbox id \"43aa251e5968735d7cef451cfd2fda41b714a990c5d51d80fae740414dd70b54\"" Jan 23 01:01:34.947597 containerd[1610]: time="2026-01-23T01:01:34.947523925Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:01:34.963061 kubelet[2778]: E0123 01:01:34.962995 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:01:34.996417 kubelet[2778]: I0123 01:01:34.996192 2778 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dz8md" podStartSLOduration=40.996175067 podStartE2EDuration="40.996175067s" podCreationTimestamp="2026-01-23 01:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 01:01:34.984804838 +0000 UTC m=+47.401438214" watchObservedRunningTime="2026-01-23 01:01:34.996175067 +0000 UTC m=+47.412808443" Jan 23 01:01:35.128093 systemd-networkd[1490]: cali2cbd7b74cda: Gained IPv6LL Jan 23 01:01:35.370591 containerd[1610]: time="2026-01-23T01:01:35.370532876Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:35.371552 containerd[1610]: time="2026-01-23T01:01:35.371499283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:01:35.371626 containerd[1610]: time="2026-01-23T01:01:35.371511973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:35.371889 kubelet[2778]: E0123 01:01:35.371839 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:35.371947 kubelet[2778]: E0123 01:01:35.371903 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:35.372360 kubelet[2778]: E0123 01:01:35.372167 2778 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqllq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:35.373452 kubelet[2778]: E0123 01:01:35.373393 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:01:35.964862 kubelet[2778]: E0123 01:01:35.964243 2778 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:01:36.471951 systemd-networkd[1490]: calia89c9c80b82: Gained IPv6LL Jan 23 01:01:44.734143 containerd[1610]: time="2026-01-23T01:01:44.733602060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:01:45.159348 containerd[1610]: time="2026-01-23T01:01:45.159294907Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:45.160724 containerd[1610]: time="2026-01-23T01:01:45.160657916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:01:45.160837 containerd[1610]: time="2026-01-23T01:01:45.160736536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:01:45.160987 kubelet[2778]: E0123 01:01:45.160917 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:45.160987 kubelet[2778]: E0123 01:01:45.160970 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:01:45.161494 kubelet[2778]: E0123 01:01:45.161105 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e2be303283a4989a321aace85e4dbd6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:45.164588 containerd[1610]: time="2026-01-23T01:01:45.164106941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:01:45.598486 containerd[1610]: time="2026-01-23T01:01:45.598404182Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:45.600340 containerd[1610]: time="2026-01-23T01:01:45.600012861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:01:45.600340 containerd[1610]: time="2026-01-23T01:01:45.600182571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:45.601501 kubelet[2778]: E0123 01:01:45.600717 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:45.601501 kubelet[2778]: E0123 01:01:45.600801 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:01:45.601501 kubelet[2778]: E0123 01:01:45.600988 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:45.602782 kubelet[2778]: E0123 01:01:45.602670 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:01:45.734435 containerd[1610]: time="2026-01-23T01:01:45.733904010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:01:46.186026 containerd[1610]: time="2026-01-23T01:01:46.185962381Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 
01:01:46.187430 containerd[1610]: time="2026-01-23T01:01:46.187388309Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:01:46.187502 containerd[1610]: time="2026-01-23T01:01:46.187483619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:01:46.187727 kubelet[2778]: E0123 01:01:46.187654 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:46.187727 kubelet[2778]: E0123 01:01:46.187705 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:01:46.188588 kubelet[2778]: E0123 01:01:46.188312 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:46.188726 containerd[1610]: time="2026-01-23T01:01:46.187977458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:01:46.626686 containerd[1610]: time="2026-01-23T01:01:46.626617658Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:46.629207 containerd[1610]: time="2026-01-23T01:01:46.629057765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:01:46.629207 containerd[1610]: time="2026-01-23T01:01:46.629135855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:46.629961 kubelet[2778]: E0123 01:01:46.629621 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:46.629961 kubelet[2778]: E0123 01:01:46.629688 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:01:46.630337 kubelet[2778]: E0123 01:01:46.629997 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcfrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:46.632114 containerd[1610]: time="2026-01-23T01:01:46.632012752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:01:46.632916 kubelet[2778]: E0123 01:01:46.632432 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:01:47.082646 containerd[1610]: time="2026-01-23T01:01:47.082531663Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:47.085406 containerd[1610]: time="2026-01-23T01:01:47.085286530Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:01:47.085567 containerd[1610]: time="2026-01-23T01:01:47.085418989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:01:47.085814 kubelet[2778]: E0123 01:01:47.085654 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:47.085814 kubelet[2778]: E0123 01:01:47.085789 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc 
= failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:01:47.086852 kubelet[2778]: E0123 01:01:47.086194 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:47.087114 containerd[1610]: time="2026-01-23T01:01:47.086439179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:47.087579 kubelet[2778]: E0123 01:01:47.087493 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:01:47.539692 containerd[1610]: time="2026-01-23T01:01:47.539582524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:47.541322 containerd[1610]: time="2026-01-23T01:01:47.540829383Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:47.541322 containerd[1610]: time="2026-01-23T01:01:47.540913832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:47.541473 kubelet[2778]: E0123 01:01:47.541400 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:47.541473 kubelet[2778]: E0123 01:01:47.541441 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:47.542109 kubelet[2778]: E0123 01:01:47.541607 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7nq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:47.542240 containerd[1610]: time="2026-01-23T01:01:47.542118151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:01:47.543408 kubelet[2778]: E0123 01:01:47.543341 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:01:47.969021 containerd[1610]: time="2026-01-23T01:01:47.968814785Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:47.971674 containerd[1610]: time="2026-01-23T01:01:47.971613462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:01:47.972061 containerd[1610]: time="2026-01-23T01:01:47.971843841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:01:47.972368 kubelet[2778]: E0123 01:01:47.972250 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:47.972368 kubelet[2778]: E0123 01:01:47.972333 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:01:47.972537 kubelet[2778]: E0123 01:01:47.972443 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqllq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:47.974074 kubelet[2778]: E0123 01:01:47.973533 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:01:48.734712 containerd[1610]: time="2026-01-23T01:01:48.734644775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:01:49.164672 containerd[1610]: time="2026-01-23T01:01:49.164615694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:01:49.165887 containerd[1610]: time="2026-01-23T01:01:49.165827393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:01:49.166027 containerd[1610]: time="2026-01-23T01:01:49.165916213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:01:49.166298 kubelet[2778]: E0123 01:01:49.166232 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:49.166298 kubelet[2778]: E0123 01:01:49.166298 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:01:49.167225 kubelet[2778]: E0123 01:01:49.167118 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq6hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:01:49.168448 kubelet[2778]: E0123 01:01:49.168403 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:01:57.740821 kubelet[2778]: E0123 01:01:57.739538 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:01:58.736637 kubelet[2778]: E0123 01:01:58.736530 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:02:00.733881 kubelet[2778]: E0123 01:02:00.733627 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:02:01.736094 kubelet[2778]: E0123 01:02:01.736032 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:02:02.734646 kubelet[2778]: E0123 01:02:02.734444 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:02:03.737617 kubelet[2778]: E0123 01:02:03.737416 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:02:09.736791 containerd[1610]: time="2026-01-23T01:02:09.736515186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:02:10.174459 containerd[1610]: time="2026-01-23T01:02:10.174293593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:10.177580 containerd[1610]: time="2026-01-23T01:02:10.177493872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:02:10.177717 containerd[1610]: time="2026-01-23T01:02:10.177672101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:02:10.178249 kubelet[2778]: E0123 01:02:10.178164 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:02:10.178829 kubelet[2778]: E0123 01:02:10.178263 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:02:10.178829 kubelet[2778]: E0123 01:02:10.178624 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e2be303283a4989a321aace85e4dbd6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:10.183880 containerd[1610]: time="2026-01-23T01:02:10.183835581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:02:10.617236 containerd[1610]: time="2026-01-23T01:02:10.617051423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:10.618838 containerd[1610]: time="2026-01-23T01:02:10.618700694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:02:10.619094 containerd[1610]: time="2026-01-23T01:02:10.618767934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:02:10.619509 kubelet[2778]: E0123 01:02:10.619383 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:02:10.619781 kubelet[2778]: E0123 01:02:10.619648 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:02:10.620006 kubelet[2778]: E0123 01:02:10.619948 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:10.621716 kubelet[2778]: E0123 01:02:10.621663 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:02:10.735805 containerd[1610]: time="2026-01-23T01:02:10.735465947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:02:11.188404 containerd[1610]: time="2026-01-23T01:02:11.188271921Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:11.189580 containerd[1610]: time="2026-01-23T01:02:11.189469650Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:02:11.189580 containerd[1610]: time="2026-01-23T01:02:11.189563560Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:02:11.189821 kubelet[2778]: E0123 01:02:11.189796 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:02:11.191152 kubelet[2778]: E0123 01:02:11.190788 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:02:11.191152 kubelet[2778]: E0123 01:02:11.190898 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:11.193763 containerd[1610]: time="2026-01-23T01:02:11.193131590Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 01:02:11.629402 containerd[1610]: time="2026-01-23T01:02:11.629308623Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:11.630928 containerd[1610]: time="2026-01-23T01:02:11.630839443Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 01:02:11.631064 containerd[1610]: time="2026-01-23T01:02:11.630944822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 01:02:11.631303 kubelet[2778]: E0123 01:02:11.631220 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:02:11.631395 kubelet[2778]: E0123 01:02:11.631311 2778 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 01:02:11.631721 kubelet[2778]: E0123 01:02:11.631456 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:11.633215 kubelet[2778]: E0123 01:02:11.633128 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:02:12.734600 containerd[1610]: time="2026-01-23T01:02:12.734549489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:02:13.172236 containerd[1610]: time="2026-01-23T01:02:13.172063326Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:13.175288 containerd[1610]: time="2026-01-23T01:02:13.174853515Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:02:13.176858 containerd[1610]: time="2026-01-23T01:02:13.174951725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:13.177048 kubelet[2778]: E0123 01:02:13.176977 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:13.177680 kubelet[2778]: E0123 01:02:13.177062 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:13.177741 kubelet[2778]: E0123 01:02:13.177662 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcfrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:13.178914 kubelet[2778]: E0123 01:02:13.178861 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:02:13.736827 containerd[1610]: time="2026-01-23T01:02:13.736692438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:02:14.162696 containerd[1610]: time="2026-01-23T01:02:14.160793996Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:14.162696 containerd[1610]: time="2026-01-23T01:02:14.161894406Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:02:14.162696 containerd[1610]: time="2026-01-23T01:02:14.161954706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:14.162908 kubelet[2778]: E0123 01:02:14.162074 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:14.162908 kubelet[2778]: E0123 01:02:14.162144 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:14.162908 kubelet[2778]: E0123 01:02:14.162241 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq6hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:14.163917 kubelet[2778]: E0123 01:02:14.163881 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:02:14.736558 containerd[1610]: time="2026-01-23T01:02:14.735275789Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:02:15.176522 containerd[1610]: time="2026-01-23T01:02:15.176444024Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:15.178220 
containerd[1610]: time="2026-01-23T01:02:15.178102933Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:02:15.178974 containerd[1610]: time="2026-01-23T01:02:15.178156823Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:15.179226 kubelet[2778]: E0123 01:02:15.179105 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:15.179226 kubelet[2778]: E0123 01:02:15.179157 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:15.180471 kubelet[2778]: E0123 01:02:15.179316 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7nq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:15.182227 kubelet[2778]: E0123 01:02:15.182170 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:02:17.734318 containerd[1610]: time="2026-01-23T01:02:17.734063122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 01:02:18.171495 containerd[1610]: time="2026-01-23T01:02:18.171437433Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:18.172902 containerd[1610]: time="2026-01-23T01:02:18.172809503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 01:02:18.172902 containerd[1610]: time="2026-01-23T01:02:18.172850613Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 01:02:18.173108 kubelet[2778]: E0123 01:02:18.173033 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:02:18.173108 kubelet[2778]: E0123 01:02:18.173086 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 01:02:18.174223 kubelet[2778]: E0123 01:02:18.173216 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqllq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:18.174640 kubelet[2778]: E0123 01:02:18.174589 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:02:24.738401 kubelet[2778]: E0123 01:02:24.738326 2778 pod_workers.go:1301] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:02:24.739207 kubelet[2778]: E0123 01:02:24.738530 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:02:26.733264 kubelet[2778]: E0123 01:02:26.733009 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:02:27.735223 kubelet[2778]: E0123 01:02:27.735119 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:02:29.739212 kubelet[2778]: E0123 01:02:29.737062 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:02:30.734600 kubelet[2778]: E0123 01:02:30.734133 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:02:35.736707 kubelet[2778]: E0123 01:02:35.736623 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:02:37.745602 kubelet[2778]: E0123 01:02:37.745524 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:02:38.557096 systemd[1]: Started sshd@7-89.167.6.201:22-4.153.228.146:57690.service - OpenSSH per-connection server daemon (4.153.228.146:57690). 
Jan 23 01:02:39.356464 sshd[4933]: Accepted publickey for core from 4.153.228.146 port 57690 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:02:39.360042 sshd-session[4933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:39.371826 systemd-logind[1580]: New session 8 of user core. Jan 23 01:02:39.380963 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 01:02:39.736168 kubelet[2778]: E0123 01:02:39.734889 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:02:40.027791 sshd[4936]: Connection closed by 4.153.228.146 port 57690 Jan 23 01:02:40.028986 sshd-session[4933]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:40.036265 systemd-logind[1580]: Session 8 logged out. Waiting for processes to exit. Jan 23 01:02:40.037001 systemd[1]: sshd@7-89.167.6.201:22-4.153.228.146:57690.service: Deactivated successfully. Jan 23 01:02:40.043497 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 01:02:40.048706 systemd-logind[1580]: Removed session 8. Jan 23 01:02:41.734310 kubelet[2778]: E0123 01:02:41.733689 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:02:41.737800 kubelet[2778]: E0123 01:02:41.737412 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:02:42.733261 kubelet[2778]: E0123 01:02:42.733217 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 
01:02:45.165040 systemd[1]: Started sshd@8-89.167.6.201:22-4.153.228.146:40904.service - OpenSSH per-connection server daemon (4.153.228.146:40904). Jan 23 01:02:45.950614 sshd[4950]: Accepted publickey for core from 4.153.228.146 port 40904 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:02:45.953358 sshd-session[4950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:45.957363 systemd-logind[1580]: New session 9 of user core. Jan 23 01:02:45.974898 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 01:02:46.575203 sshd[4953]: Connection closed by 4.153.228.146 port 40904 Jan 23 01:02:46.575844 sshd-session[4950]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:46.580814 systemd-logind[1580]: Session 9 logged out. Waiting for processes to exit. Jan 23 01:02:46.581659 systemd[1]: sshd@8-89.167.6.201:22-4.153.228.146:40904.service: Deactivated successfully. Jan 23 01:02:46.584324 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 01:02:46.586436 systemd-logind[1580]: Removed session 9. Jan 23 01:02:47.735433 kubelet[2778]: E0123 01:02:47.735296 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" Jan 23 01:02:48.739259 kubelet[2778]: E0123 01:02:48.737998 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd" Jan 23 01:02:51.713196 systemd[1]: Started sshd@9-89.167.6.201:22-4.153.228.146:40914.service - OpenSSH per-connection server daemon (4.153.228.146:40914). 
Jan 23 01:02:52.511374 sshd[4974]: Accepted publickey for core from 4.153.228.146 port 40914 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:02:52.513917 sshd-session[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:52.523781 systemd-logind[1580]: New session 10 of user core. Jan 23 01:02:52.533031 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 01:02:52.733854 kubelet[2778]: E0123 01:02:52.733814 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c" Jan 23 01:02:53.178962 sshd[4981]: Connection closed by 4.153.228.146 port 40914 Jan 23 01:02:53.182008 sshd-session[4974]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:53.192464 systemd-logind[1580]: Session 10 logged out. Waiting for processes to exit. Jan 23 01:02:53.192534 systemd[1]: sshd@9-89.167.6.201:22-4.153.228.146:40914.service: Deactivated successfully. Jan 23 01:02:53.199341 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 01:02:53.205243 systemd-logind[1580]: Removed session 10. Jan 23 01:02:53.317594 systemd[1]: Started sshd@10-89.167.6.201:22-4.153.228.146:40924.service - OpenSSH per-connection server daemon (4.153.228.146:40924). Jan 23 01:02:54.117548 sshd[4994]: Accepted publickey for core from 4.153.228.146 port 40924 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:02:54.122046 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:54.134703 systemd-logind[1580]: New session 11 of user core. Jan 23 01:02:54.141984 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 01:02:54.737437 containerd[1610]: time="2026-01-23T01:02:54.737041973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 01:02:54.741726 sshd[4997]: Connection closed by 4.153.228.146 port 40924 Jan 23 01:02:54.742083 sshd-session[4994]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:54.751420 systemd[1]: sshd@10-89.167.6.201:22-4.153.228.146:40924.service: Deactivated successfully. Jan 23 01:02:54.752069 systemd-logind[1580]: Session 11 logged out. Waiting for processes to exit. Jan 23 01:02:54.758471 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 01:02:54.767031 systemd-logind[1580]: Removed session 11. Jan 23 01:02:54.887067 systemd[1]: Started sshd@11-89.167.6.201:22-4.153.228.146:41934.service - OpenSSH per-connection server daemon (4.153.228.146:41934). 
Jan 23 01:02:55.168471 containerd[1610]: time="2026-01-23T01:02:55.168345320Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:55.170709 containerd[1610]: time="2026-01-23T01:02:55.170449770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 01:02:55.170709 containerd[1610]: time="2026-01-23T01:02:55.170555671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:55.171019 kubelet[2778]: E0123 01:02:55.170980 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:55.172001 kubelet[2778]: E0123 01:02:55.171923 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 01:02:55.172250 kubelet[2778]: E0123 01:02:55.172103 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qcfrx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-8rskh_calico-system(9345d4db-ab4d-4bde-b8b4-e031305112f9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:55.173958 kubelet[2778]: E0123 01:02:55.173701 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9" Jan 23 01:02:55.680320 sshd[5011]: Accepted publickey for core from 4.153.228.146 port 41934 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:02:55.682026 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:02:55.686631 systemd-logind[1580]: New session 12 of user core. Jan 23 01:02:55.691992 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 23 01:02:55.736616 containerd[1610]: time="2026-01-23T01:02:55.736556715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:02:56.180413 containerd[1610]: time="2026-01-23T01:02:56.179762298Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:56.184331 containerd[1610]: time="2026-01-23T01:02:56.184206610Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:02:56.184504 kubelet[2778]: E0123 01:02:56.184434 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:56.184504 kubelet[2778]: E0123 01:02:56.184485 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:56.185910 kubelet[2778]: E0123 01:02:56.184701 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r7nq7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-l7jwf_calico-apiserver(9a36a138-e7e1-4031-9559-4a28820b727d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:56.186476 containerd[1610]: time="2026-01-23T01:02:56.184292800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:56.186476 containerd[1610]: time="2026-01-23T01:02:56.186058061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 01:02:56.189384 kubelet[2778]: E0123 01:02:56.189331 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d" Jan 23 01:02:56.341037 sshd[5014]: Connection closed by 4.153.228.146 port 41934 Jan 23 01:02:56.342013 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Jan 23 01:02:56.354702 systemd[1]: sshd@11-89.167.6.201:22-4.153.228.146:41934.service: Deactivated successfully. Jan 23 01:02:56.354970 systemd-logind[1580]: Session 12 logged out. Waiting for processes to exit. Jan 23 01:02:56.360535 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 01:02:56.366586 systemd-logind[1580]: Removed session 12. 
Jan 23 01:02:56.628309 containerd[1610]: time="2026-01-23T01:02:56.628173565Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:02:56.629426 containerd[1610]: time="2026-01-23T01:02:56.629356256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 01:02:56.629627 containerd[1610]: time="2026-01-23T01:02:56.629488806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 01:02:56.629714 kubelet[2778]: E0123 01:02:56.629659 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:56.629714 kubelet[2778]: E0123 01:02:56.629713 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 01:02:56.629949 kubelet[2778]: E0123 01:02:56.629858 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dq6hb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6464b5f569-5d4c5_calico-apiserver(a59d5fb0-f7af-4d9c-9241-42ed48cacc40): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 01:02:56.631369 kubelet[2778]: E0123 01:02:56.631106 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40" Jan 23 01:03:00.733555 containerd[1610]: time="2026-01-23T01:03:00.732872165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 01:03:01.366319 containerd[1610]: time="2026-01-23T01:03:01.366160431Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:03:01.367836 containerd[1610]: time="2026-01-23T01:03:01.367723261Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 01:03:01.368819 containerd[1610]: time="2026-01-23T01:03:01.367938371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 01:03:01.369046 kubelet[2778]: E0123 01:03:01.368292 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:03:01.369046 kubelet[2778]: E0123 01:03:01.368391 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 01:03:01.370379 kubelet[2778]: E0123 01:03:01.369883 2778 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:4e2be303283a4989a321aace85e4dbd6,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 01:03:01.371540 containerd[1610]: time="2026-01-23T01:03:01.371374193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 01:03:01.469953 systemd[1]: Started sshd@12-89.167.6.201:22-4.153.228.146:41938.service - OpenSSH per-connection server daemon (4.153.228.146:41938). 
Jan 23 01:03:01.812220 containerd[1610]: time="2026-01-23T01:03:01.812152888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:03:01.813795 containerd[1610]: time="2026-01-23T01:03:01.813430918Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 01:03:01.813795 containerd[1610]: time="2026-01-23T01:03:01.813507539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 01:03:01.813958 kubelet[2778]: E0123 01:03:01.813673 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:03:01.813958 kubelet[2778]: E0123 01:03:01.813720 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 01:03:01.813958 kubelet[2778]: E0123 01:03:01.813886 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 01:03:01.814507 containerd[1610]: time="2026-01-23T01:03:01.814438218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 01:03:02.245358 sshd[5056]: Accepted publickey for core from 4.153.228.146 port 41938 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08 Jan 23 01:03:02.247874 sshd-session[5056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 01:03:02.256708 systemd-logind[1580]: New session 13 of user core. Jan 23 01:03:02.260236 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 01:03:02.265985 containerd[1610]: time="2026-01-23T01:03:02.265499889Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 01:03:02.267240 containerd[1610]: time="2026-01-23T01:03:02.267206670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 01:03:02.267352 containerd[1610]: time="2026-01-23T01:03:02.267341060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 01:03:02.267540 kubelet[2778]: E0123 01:03:02.267512 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:03:02.267616 kubelet[2778]: E0123 01:03:02.267607 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 01:03:02.268014 kubelet[2778]: E0123 01:03:02.267851 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m94jx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7fb57f75c6-jldrr_calico-system(7df44e0e-f60c-4fe5-bce1-8acbf408ec27): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:03:02.268298 containerd[1610]: time="2026-01-23T01:03:02.268280050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 01:03:02.269834 kubelet[2778]: E0123 01:03:02.269810 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27"
Jan 23 01:03:02.701391 containerd[1610]: time="2026-01-23T01:03:02.701331414Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:03:02.702928 containerd[1610]: time="2026-01-23T01:03:02.702875754Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 01:03:02.703017 containerd[1610]: time="2026-01-23T01:03:02.702973005Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 01:03:02.703634 kubelet[2778]: E0123 01:03:02.703569 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:03:02.704255 kubelet[2778]: E0123 01:03:02.703642 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 01:03:02.704255 kubelet[2778]: E0123 01:03:02.704166 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h2968,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc28p_calico-system(5259dbbd-3b68-4b5b-9de4-297f13034dcd): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:03:02.705496 kubelet[2778]: E0123 01:03:02.705455 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:03:02.865939 sshd[5059]: Connection closed by 4.153.228.146 port 41938
Jan 23 01:03:02.869468 sshd-session[5056]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:02.878154 systemd-logind[1580]: Session 13 logged out. Waiting for processes to exit.
Jan 23 01:03:02.878798 systemd[1]: sshd@12-89.167.6.201:22-4.153.228.146:41938.service: Deactivated successfully.
Jan 23 01:03:02.884491 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 01:03:02.890443 systemd-logind[1580]: Removed session 13.
Jan 23 01:03:03.007181 systemd[1]: Started sshd@13-89.167.6.201:22-4.153.228.146:41954.service - OpenSSH per-connection server daemon (4.153.228.146:41954).
Jan 23 01:03:03.798603 sshd[5071]: Accepted publickey for core from 4.153.228.146 port 41954 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:03.801956 sshd-session[5071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:03.822107 systemd-logind[1580]: New session 14 of user core.
Jan 23 01:03:03.832203 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 01:03:04.618531 sshd[5074]: Connection closed by 4.153.228.146 port 41954
Jan 23 01:03:04.620260 sshd-session[5071]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:04.624982 systemd-logind[1580]: Session 14 logged out. Waiting for processes to exit.
Jan 23 01:03:04.625054 systemd[1]: sshd@13-89.167.6.201:22-4.153.228.146:41954.service: Deactivated successfully.
Jan 23 01:03:04.627043 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 01:03:04.628861 systemd-logind[1580]: Removed session 14.
Jan 23 01:03:04.754986 systemd[1]: Started sshd@14-89.167.6.201:22-4.153.228.146:41416.service - OpenSSH per-connection server daemon (4.153.228.146:41416).
Jan 23 01:03:05.530905 sshd[5084]: Accepted publickey for core from 4.153.228.146 port 41416 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:05.533734 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:05.543116 systemd-logind[1580]: New session 15 of user core.
Jan 23 01:03:05.548222 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 01:03:06.734373 containerd[1610]: time="2026-01-23T01:03:06.734330262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 01:03:06.734870 kubelet[2778]: E0123 01:03:06.734673 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d"
Jan 23 01:03:06.907807 sshd[5087]: Connection closed by 4.153.228.146 port 41416
Jan 23 01:03:06.909641 sshd-session[5084]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:06.913706 systemd-logind[1580]: Session 15 logged out. Waiting for processes to exit.
Jan 23 01:03:06.914558 systemd[1]: sshd@14-89.167.6.201:22-4.153.228.146:41416.service: Deactivated successfully.
Jan 23 01:03:06.917721 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 01:03:06.920281 systemd-logind[1580]: Removed session 15.
Jan 23 01:03:07.041071 systemd[1]: Started sshd@15-89.167.6.201:22-4.153.228.146:41430.service - OpenSSH per-connection server daemon (4.153.228.146:41430).
Jan 23 01:03:07.155690 containerd[1610]: time="2026-01-23T01:03:07.155631789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 01:03:07.156607 containerd[1610]: time="2026-01-23T01:03:07.156576450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 01:03:07.156667 containerd[1610]: time="2026-01-23T01:03:07.156645570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 01:03:07.156847 kubelet[2778]: E0123 01:03:07.156795 2778 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:03:07.156892 kubelet[2778]: E0123 01:03:07.156849 2778 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 01:03:07.156985 kubelet[2778]: E0123 01:03:07.156948 2778 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqllq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-56555c56bc-bq2nx_calico-system(5d7dff36-d3fb-4b97-8324-ac6e2554491c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 01:03:07.158507 kubelet[2778]: E0123 01:03:07.158395 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c"
Jan 23 01:03:07.734991 kubelet[2778]: E0123 01:03:07.734830 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40"
Jan 23 01:03:07.806859 sshd[5119]: Accepted publickey for core from 4.153.228.146 port 41430 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:07.808726 sshd-session[5119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:07.822269 systemd-logind[1580]: New session 16 of user core.
Jan 23 01:03:07.829947 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 23 01:03:08.477456 sshd[5122]: Connection closed by 4.153.228.146 port 41430
Jan 23 01:03:08.478942 sshd-session[5119]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:08.483952 systemd[1]: sshd@15-89.167.6.201:22-4.153.228.146:41430.service: Deactivated successfully.
Jan 23 01:03:08.486969 systemd[1]: session-16.scope: Deactivated successfully.
Jan 23 01:03:08.489693 systemd-logind[1580]: Session 16 logged out. Waiting for processes to exit.
Jan 23 01:03:08.491564 systemd-logind[1580]: Removed session 16.
Jan 23 01:03:08.609138 systemd[1]: Started sshd@16-89.167.6.201:22-4.153.228.146:41446.service - OpenSSH per-connection server daemon (4.153.228.146:41446).
Jan 23 01:03:08.734129 kubelet[2778]: E0123 01:03:08.733684 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9"
Jan 23 01:03:09.379080 sshd[5131]: Accepted publickey for core from 4.153.228.146 port 41446 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:09.381540 sshd-session[5131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:09.392501 systemd-logind[1580]: New session 17 of user core.
Jan 23 01:03:09.398123 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 23 01:03:09.972737 sshd[5134]: Connection closed by 4.153.228.146 port 41446
Jan 23 01:03:09.973914 sshd-session[5131]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:09.979323 systemd[1]: sshd@16-89.167.6.201:22-4.153.228.146:41446.service: Deactivated successfully.
Jan 23 01:03:09.979979 systemd-logind[1580]: Session 17 logged out. Waiting for processes to exit.
Jan 23 01:03:09.985062 systemd[1]: session-17.scope: Deactivated successfully.
Jan 23 01:03:09.989333 systemd-logind[1580]: Removed session 17.
Jan 23 01:03:14.734719 kubelet[2778]: E0123 01:03:14.734668 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:03:15.106918 systemd[1]: Started sshd@17-89.167.6.201:22-4.153.228.146:56152.service - OpenSSH per-connection server daemon (4.153.228.146:56152).
Jan 23 01:03:15.885305 sshd[5147]: Accepted publickey for core from 4.153.228.146 port 56152 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:15.887695 sshd-session[5147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:15.899729 systemd-logind[1580]: New session 18 of user core.
Jan 23 01:03:15.907033 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 23 01:03:16.479405 sshd[5150]: Connection closed by 4.153.228.146 port 56152
Jan 23 01:03:16.480703 sshd-session[5147]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:16.489317 systemd-logind[1580]: Session 18 logged out. Waiting for processes to exit.
Jan 23 01:03:16.490564 systemd[1]: sshd@17-89.167.6.201:22-4.153.228.146:56152.service: Deactivated successfully.
Jan 23 01:03:16.497013 systemd[1]: session-18.scope: Deactivated successfully.
Jan 23 01:03:16.502034 systemd-logind[1580]: Removed session 18.
Jan 23 01:03:17.734598 kubelet[2778]: E0123 01:03:17.734337 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c"
Jan 23 01:03:17.734598 kubelet[2778]: E0123 01:03:17.734412 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27"
Jan 23 01:03:18.734642 kubelet[2778]: E0123 01:03:18.734547 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d"
Jan 23 01:03:19.733487 kubelet[2778]: E0123 01:03:19.733216 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9"
Jan 23 01:03:20.736133 kubelet[2778]: E0123 01:03:20.735718 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40"
Jan 23 01:03:21.614539 systemd[1]: Started sshd@18-89.167.6.201:22-4.153.228.146:56168.service - OpenSSH per-connection server daemon (4.153.228.146:56168).
Jan 23 01:03:22.384680 sshd[5161]: Accepted publickey for core from 4.153.228.146 port 56168 ssh2: RSA SHA256:4pclTtHXNTDkTdkFserBgzRl8OUewnaIN45KXNU4z08
Jan 23 01:03:22.387455 sshd-session[5161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 01:03:22.399576 systemd-logind[1580]: New session 19 of user core.
Jan 23 01:03:22.405027 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 23 01:03:22.971881 sshd[5164]: Connection closed by 4.153.228.146 port 56168
Jan 23 01:03:22.973868 sshd-session[5161]: pam_unix(sshd:session): session closed for user core
Jan 23 01:03:22.978889 systemd[1]: sshd@18-89.167.6.201:22-4.153.228.146:56168.service: Deactivated successfully.
Jan 23 01:03:22.981640 systemd[1]: session-19.scope: Deactivated successfully.
Jan 23 01:03:22.986154 systemd-logind[1580]: Session 19 logged out. Waiting for processes to exit.
Jan 23 01:03:22.987224 systemd-logind[1580]: Removed session 19.
Jan 23 01:03:25.738533 kubelet[2778]: E0123 01:03:25.738465 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:03:28.732270 kubelet[2778]: E0123 01:03:28.732214 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c"
Jan 23 01:03:32.735168 kubelet[2778]: E0123 01:03:32.735086 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7fb57f75c6-jldrr" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27"
Jan 23 01:03:33.734869 kubelet[2778]: E0123 01:03:33.734737 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d"
Jan 23 01:03:34.732623 kubelet[2778]: E0123 01:03:34.732579 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-5d4c5" podUID="a59d5fb0-f7af-4d9c-9241-42ed48cacc40"
Jan 23 01:03:34.733163 kubelet[2778]: E0123 01:03:34.733066 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-8rskh" podUID="9345d4db-ab4d-4bde-b8b4-e031305112f9"
Jan 23 01:03:39.736630 kubelet[2778]: E0123 01:03:39.736334 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc28p" podUID="5259dbbd-3b68-4b5b-9de4-297f13034dcd"
Jan 23 01:03:39.807698 kubelet[2778]: E0123 01:03:39.807645 2778 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48886->10.0.0.2:2379: read: connection timed out"
Jan 23 01:03:39.818668 systemd[1]: cri-containerd-f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003.scope: Deactivated successfully.
Jan 23 01:03:39.819246 systemd[1]: cri-containerd-f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003.scope: Consumed 2.293s CPU time, 20.9M memory peak, 64K read from disk.
Jan 23 01:03:39.826478 containerd[1610]: time="2026-01-23T01:03:39.826319688Z" level=info msg="received container exit event container_id:\"f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003\" id:\"f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003\" pid:2632 exit_status:1 exited_at:{seconds:1769130219 nanos:825561948}"
Jan 23 01:03:39.882096 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003-rootfs.mount: Deactivated successfully.
Jan 23 01:03:40.258094 systemd[1]: cri-containerd-154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a.scope: Deactivated successfully.
Jan 23 01:03:40.258695 systemd[1]: cri-containerd-154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a.scope: Consumed 19.521s CPU time, 117M memory peak.
Jan 23 01:03:40.266418 containerd[1610]: time="2026-01-23T01:03:40.266277649Z" level=info msg="received container exit event container_id:\"154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a\" id:\"154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a\" pid:3128 exit_status:1 exited_at:{seconds:1769130220 nanos:265129764}"
Jan 23 01:03:40.303453 kubelet[2778]: I0123 01:03:40.302639 2778 scope.go:117] "RemoveContainer" containerID="f01c529b2357ea835d92f6ede8f476f6ad5932b5f600969106373f25d3aaf003"
Jan 23 01:03:40.308908 containerd[1610]: time="2026-01-23T01:03:40.308846779Z" level=info msg="CreateContainer within sandbox \"859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 01:03:40.318701 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a-rootfs.mount: Deactivated successfully.
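[editor's note] The lease-renewal timeout above (10.0.0.3:48886->10.0.0.2:2379) points at etcd rather than this node, and the kube-scheduler and controller-manager exits that follow are consistent with a wedged datastore. A minimal reachability probe with the official etcd Go client, using the endpoint from the log; TLS configuration is omitted here and would be required against a real kubeadm-style etcd:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Endpoint taken from the log above; client certificates (TLS) are
	// deliberately omitted in this sketch.
	ep := "10.0.0.2:2379"
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{ep},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	st, err := cli.Status(ctx, ep)
	if err != nil {
		fmt.Println("etcd unreachable:", err) // matches the timeouts in the log
		return
	}
	fmt.Printf("etcd ok: leader=%x dbSize=%d\n", st.Leader, st.DbSize)
}
```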
Jan 23 01:03:40.339881 containerd[1610]: time="2026-01-23T01:03:40.337620964Z" level=info msg="Container 608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:40.353619 containerd[1610]: time="2026-01-23T01:03:40.353564521Z" level=info msg="CreateContainer within sandbox \"859a7f091858d387e7ec3a828183114f724aa23633d98067d4f9b5dd22bd7315\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3\""
Jan 23 01:03:40.354612 containerd[1610]: time="2026-01-23T01:03:40.354579369Z" level=info msg="StartContainer for \"608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3\""
Jan 23 01:03:40.356517 containerd[1610]: time="2026-01-23T01:03:40.356444804Z" level=info msg="connecting to shim 608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3" address="unix:///run/containerd/s/768f2145c617b0123eaf6b9925916cfec6200cc38a44001b18ae6914dd44fbb2" protocol=ttrpc version=3
Jan 23 01:03:40.396038 systemd[1]: Started cri-containerd-608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3.scope - libcontainer container 608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3.
Jan 23 01:03:40.497782 containerd[1610]: time="2026-01-23T01:03:40.497630452Z" level=info msg="StartContainer for \"608e6e16aba76a52d08da84ab8e935c26ac39ff1f552818260ec3c86b36d34b3\" returns successfully"
Jan 23 01:03:40.911279 systemd[1]: cri-containerd-d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729.scope: Deactivated successfully.
Jan 23 01:03:40.911611 systemd[1]: cri-containerd-d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729.scope: Consumed 2.999s CPU time, 59M memory peak, 64K read from disk.
Jan 23 01:03:40.914150 containerd[1610]: time="2026-01-23T01:03:40.914084015Z" level=info msg="received container exit event container_id:\"d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729\" id:\"d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729\" pid:2611 exit_status:1 exited_at:{seconds:1769130220 nanos:913572621}"
Jan 23 01:03:40.943543 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729-rootfs.mount: Deactivated successfully.
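[editor's note] The `received container exit event ... exit_status:1` records are containerd task-exit events, the same stream the kubelet consumes before deciding to restart a container. A hedged sketch that watches that stream directly, assuming the default containerd socket and the kubelet's `k8s.io` namespace (the filter string follows containerd's filter grammar; drop it to see all events):

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Assumptions: default socket path, and the "k8s.io" namespace that
	// CRI-managed containers live in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	envelopes, errs := client.Subscribe(ctx, `topic=="/tasks/exit"`)
	for {
		select {
		case env := <-envelopes:
			// Prints one line per task exit, mirroring the log records above.
			fmt.Printf("%s %s\n", env.Timestamp, env.Topic)
		case err := <-errs:
			panic(err)
		}
	}
}
```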
Jan 23 01:03:41.308364 kubelet[2778]: I0123 01:03:41.308068 2778 scope.go:117] "RemoveContainer" containerID="d580c6b59bf871095cbaa235cf4694f18677d78cc4ff18d3b1af7a4c51acd729"
Jan 23 01:03:41.311939 kubelet[2778]: I0123 01:03:41.311680 2778 scope.go:117] "RemoveContainer" containerID="154e0beb3b7c768a21543a64e14412c0429f596d6ec7564d47925908a99b1d6a"
Jan 23 01:03:41.312542 containerd[1610]: time="2026-01-23T01:03:41.312424955Z" level=info msg="CreateContainer within sandbox \"4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 01:03:41.316780 containerd[1610]: time="2026-01-23T01:03:41.316608762Z" level=info msg="CreateContainer within sandbox \"e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 01:03:41.331004 containerd[1610]: time="2026-01-23T01:03:41.329930505Z" level=info msg="Container 26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:41.333978 containerd[1610]: time="2026-01-23T01:03:41.333946035Z" level=info msg="Container d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43: CDI devices from CRI Config.CDIDevices: []"
Jan 23 01:03:41.342962 containerd[1610]: time="2026-01-23T01:03:41.342914263Z" level=info msg="CreateContainer within sandbox \"4407e0a495e9331d9c587fed6ff7b9af7a77ed692297976e3cf5e24c581adcc8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e\""
Jan 23 01:03:41.345634 containerd[1610]: time="2026-01-23T01:03:41.345567169Z" level=info msg="StartContainer for \"26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e\""
Jan 23 01:03:41.347570 containerd[1610]: time="2026-01-23T01:03:41.347533935Z" level=info msg="connecting to shim 26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e" address="unix:///run/containerd/s/11aecad3be69b1835c7e854c187d6f2b6ae065c12737c5a502ce2c350caed745" protocol=ttrpc version=3
Jan 23 01:03:41.355087 containerd[1610]: time="2026-01-23T01:03:41.355049691Z" level=info msg="CreateContainer within sandbox \"e5b4eae2340848501f907fd61fb0b6f1b0f93d66e4e7850b6feb4a9e763bafa9\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43\""
Jan 23 01:03:41.357557 containerd[1610]: time="2026-01-23T01:03:41.357466050Z" level=info msg="StartContainer for \"d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43\""
Jan 23 01:03:41.359991 containerd[1610]: time="2026-01-23T01:03:41.359925379Z" level=info msg="connecting to shim d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43" address="unix:///run/containerd/s/ca3cca27a5277796ff7d8f640d8820578e943025624b37449a7a9d9486e17af6" protocol=ttrpc version=3
Jan 23 01:03:41.383780 systemd[1]: Started cri-containerd-26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e.scope - libcontainer container 26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e.
Jan 23 01:03:41.399981 systemd[1]: Started cri-containerd-d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43.scope - libcontainer container d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43.
Jan 23 01:03:41.453454 containerd[1610]: time="2026-01-23T01:03:41.453400090Z" level=info msg="StartContainer for \"26193745301b85d04d8183ac965555e1ffb5614c093da141ea134f6adf20504e\" returns successfully"
Jan 23 01:03:41.466541 containerd[1610]: time="2026-01-23T01:03:41.466510036Z" level=info msg="StartContainer for \"d63c428e388815ad204ff64bca30bc0b5a56abaf5399e75c77cf23bd4618be43\" returns successfully"
Jan 23 01:03:41.734658 kubelet[2778]: E0123 01:03:41.734388 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-56555c56bc-bq2nx" podUID="5d7dff36-d3fb-4b97-8324-ac6e2554491c"
Jan 23 01:03:43.195912 kubelet[2778]: E0123 01:03:43.195665 2778 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48736->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{whisker-7fb57f75c6-jldrr.188d3671c62bb467 calico-system 1566 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:whisker-7fb57f75c6-jldrr,UID:7df44e0e-f60c-4fe5-bce1-8acbf408ec27,APIVersion:v1,ResourceVersion:886,FieldPath:spec.containers{whisker},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/whisker:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4459-2-2-n-ad7bb25bb9,},FirstTimestamp:2026-01-23 01:01:30 +0000 UTC,LastTimestamp:2026-01-23 01:03:32.733349142 +0000 UTC m=+165.149982548,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-2-n-ad7bb25bb9,}"
Jan 23 01:03:43.197950 kubelet[2778]: I0123 01:03:43.196365 2778 status_manager.go:895] "Failed to get status for pod" podUID="7df44e0e-f60c-4fe5-bce1-8acbf408ec27" pod="calico-system/whisker-7fb57f75c6-jldrr" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:48818->10.0.0.2:2379: read: connection timed out"
Jan 23 01:03:44.732664 kubelet[2778]: E0123 01:03:44.732551 2778 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6464b5f569-l7jwf" podUID="9a36a138-e7e1-4031-9559-4a28820b727d"
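[editor's note] The restarted control-plane containers come back (note `Attempt:1` above, and `Count:8` in the rejected BackOff event), but the calico pods stay wedged in back-off. A client-go sketch that surfaces exactly these pods by their waiting reason; the kubeconfig path is an assumption, the API calls are standard:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; adjust for your cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cst := range p.Status.ContainerStatuses {
			// Waiting reasons match the kubelet records in this log.
			if w := cst.State.Waiting; w != nil &&
				(w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s container %s: %s (restarts=%d)\n",
					p.Namespace, p.Name, cst.Name, w.Reason, cst.RestartCount)
			}
		}
	}
}
```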