Jan 23 18:56:21.975338 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 16:02:29 -00 2026
Jan 23 18:56:21.975368 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:21.975382 kernel: BIOS-provided physical RAM map:
Jan 23 18:56:21.975389 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 23 18:56:21.975396 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 23 18:56:21.975404 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000044fdfff] usable
Jan 23 18:56:21.975413 kernel: BIOS-e820: [mem 0x00000000044fe000-0x00000000048fdfff] reserved
Jan 23 18:56:21.975420 kernel: BIOS-e820: [mem 0x00000000048fe000-0x000000003ff1efff] usable
Jan 23 18:56:21.975427 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ffc8fff] reserved
Jan 23 18:56:21.975436 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 23 18:56:21.975443 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 23 18:56:21.975450 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 23 18:56:21.975458 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 23 18:56:21.975465 kernel: printk: legacy bootconsole [earlyser0] enabled
Jan 23 18:56:21.975474 kernel: NX (Execute Disable) protection: active
Jan 23 18:56:21.975484 kernel: APIC: Static calls initialized
Jan 23 18:56:21.975491 kernel: efi: EFI v2.7 by Microsoft
Jan 23 18:56:21.975499 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3eaa3018 RNG=0x3ffd2018
Jan 23 18:56:21.975507 kernel: random: crng init done
Jan 23 18:56:21.975515 kernel: secureboot: Secure boot disabled
Jan 23 18:56:21.975523 kernel: SMBIOS 3.1.0 present.
Jan 23 18:56:21.975530 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/25/2025
Jan 23 18:56:21.975538 kernel: DMI: Memory slots populated: 2/2
Jan 23 18:56:21.975545 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 23 18:56:21.975553 kernel: Hyper-V: privilege flags low 0xae7f, high 0x3b8030, hints 0x9e4e24, misc 0xe0bed7b2
Jan 23 18:56:21.975561 kernel: Hyper-V: Nested features: 0x3e0101
Jan 23 18:56:21.975570 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 23 18:56:21.975577 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 23 18:56:21.975585 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 18:56:21.975593 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 23 18:56:21.975600 kernel: tsc: Detected 2299.999 MHz processor
Jan 23 18:56:21.975608 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 23 18:56:21.975618 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 23 18:56:21.975626 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x10000000000
Jan 23 18:56:21.975635 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 23 18:56:21.975643 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 23 18:56:21.975654 kernel: e820: update [mem 0x48000000-0xffffffff] usable ==> reserved
Jan 23 18:56:21.975661 kernel: last_pfn = 0x40000 max_arch_pfn = 0x10000000000
Jan 23 18:56:21.975670 kernel: Using GB pages for direct mapping
Jan 23 18:56:21.975678 kernel: ACPI: Early table checksum verification disabled
Jan 23 18:56:21.975690 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 23 18:56:21.975699 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975709 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975718 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 23 18:56:21.975726 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 23 18:56:21.975735 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975744 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975753 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975761 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v05 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 18:56:21.975772 kernel: ACPI: SRAT 0x000000003FFD4000 0000A0 (v03 HVLITE HVLITETB 00000000 MSHV 00000000)
Jan 23 18:56:21.975781 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 23 18:56:21.975789 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 23 18:56:21.975798 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 23 18:56:21.975807 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 23 18:56:21.975815 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 23 18:56:21.975824 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 23 18:56:21.975832 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 23 18:56:21.975841 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 23 18:56:21.975851 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd409f]
Jan 23 18:56:21.975859 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 23 18:56:21.975868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 23 18:56:21.975877 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff]
Jan 23 18:56:21.975886 kernel: NUMA: Node 0 [mem 0x00001000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00001000-0x2bfffffff]
Jan 23 18:56:21.975895 kernel: NODE_DATA(0) allocated [mem 0x2bfff8dc0-0x2bfffffff]
Jan 23 18:56:21.975903 kernel: Zone ranges:
Jan 23 18:56:21.975912 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 23 18:56:21.975920 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 23 18:56:21.975931 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 18:56:21.975939 kernel:   Device   empty
Jan 23 18:56:21.975948 kernel: Movable zone start for each node
Jan 23 18:56:21.975956 kernel: Early memory node ranges
Jan 23 18:56:21.975965 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 23 18:56:21.975973 kernel:   node   0: [mem 0x0000000000100000-0x00000000044fdfff]
Jan 23 18:56:21.975982 kernel:   node   0: [mem 0x00000000048fe000-0x000000003ff1efff]
Jan 23 18:56:21.975990 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 23 18:56:21.975999 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 23 18:56:21.976009 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 23 18:56:21.976018 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 23 18:56:21.976026 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 23 18:56:21.976035 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 23 18:56:21.976043 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 23 18:56:21.976052 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 23 18:56:21.976061 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 23 18:56:21.976069 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 23 18:56:21.976078 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 23 18:56:21.976088 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 23 18:56:21.976097 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 23 18:56:21.976105 kernel: TSC deadline timer available
Jan 23 18:56:21.976113 kernel: CPU topo: Max. logical packages:   1
Jan 23 18:56:21.976122 kernel: CPU topo: Max. logical dies:       1
Jan 23 18:56:21.976130 kernel: CPU topo: Max. dies per package:   1
Jan 23 18:56:21.976138 kernel: CPU topo: Max. threads per core:   2
Jan 23 18:56:21.976146 kernel: CPU topo: Num. cores per package:  1
Jan 23 18:56:21.976154 kernel: CPU topo: Num. threads per package: 2
Jan 23 18:56:21.976162 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Jan 23 18:56:21.976187 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 23 18:56:21.976196 kernel: Booting paravirtualized kernel on Hyper-V
Jan 23 18:56:21.976204 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 23 18:56:21.976212 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 23 18:56:21.976219 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Jan 23 18:56:21.976227 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Jan 23 18:56:21.976235 kernel: pcpu-alloc: [0] 0 1
Jan 23 18:56:21.976242 kernel: Hyper-V: PV spinlocks enabled
Jan 23 18:56:21.976252 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 23 18:56:21.976261 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:21.976270 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 23 18:56:21.976278 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 18:56:21.976287 kernel: Fallback order for Node 0: 0
Jan 23 18:56:21.976295 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2095807
Jan 23 18:56:21.976303 kernel: Policy zone: Normal
Jan 23 18:56:21.976311 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 18:56:21.976319 kernel: software IO TLB: area num 2.
Jan 23 18:56:21.976329 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 18:56:21.976337 kernel: ftrace: allocating 40097 entries in 157 pages
Jan 23 18:56:21.976345 kernel: ftrace: allocated 157 pages with 5 groups
Jan 23 18:56:21.976353 kernel: Dynamic Preempt: voluntary
Jan 23 18:56:21.976361 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 18:56:21.976374 kernel: rcu: RCU event tracing is enabled.
Jan 23 18:56:21.976392 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 18:56:21.976401 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 18:56:21.976411 kernel: Rude variant of Tasks RCU enabled.
Jan 23 18:56:21.976420 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 18:56:21.976429 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 18:56:21.976439 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 18:56:21.976448 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:56:21.976457 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:56:21.976466 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 18:56:21.976475 kernel: Using NULL legacy PIC
Jan 23 18:56:21.976486 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 23 18:56:21.976495 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 18:56:21.976504 kernel: Console: colour dummy device 80x25
Jan 23 18:56:21.976513 kernel: printk: legacy console [tty1] enabled
Jan 23 18:56:21.976523 kernel: printk: legacy console [ttyS0] enabled
Jan 23 18:56:21.976531 kernel: printk: legacy bootconsole [earlyser0] disabled
Jan 23 18:56:21.976541 kernel: ACPI: Core revision 20240827
Jan 23 18:56:21.976550 kernel: Failed to register legacy timer interrupt
Jan 23 18:56:21.976558 kernel: APIC: Switch to symmetric I/O mode setup
Jan 23 18:56:21.976569 kernel: x2apic enabled
Jan 23 18:56:21.976578 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 23 18:56:21.976587 kernel: Hyper-V: Host Build 10.0.26100.1448-1-0
Jan 23 18:56:21.976595 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 23 18:56:21.976604 kernel: Hyper-V: Disabling IBT because of Hyper-V bug
Jan 23 18:56:21.976613 kernel: Hyper-V: Using IPI hypercalls
Jan 23 18:56:21.976621 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 23 18:56:21.976631 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 23 18:56:21.976640 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 23 18:56:21.976652 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 23 18:56:21.976660 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 23 18:56:21.976668 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 23 18:56:21.976677 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 18:56:21.976686 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 4599.99 BogoMIPS (lpj=2299999)
Jan 23 18:56:21.976694 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 23 18:56:21.976704 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Jan 23 18:56:21.976713 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Jan 23 18:56:21.976722 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 23 18:56:21.976733 kernel: Spectre V2 : Mitigation: Retpolines
Jan 23 18:56:21.976742 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 23 18:56:21.976751 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 23 18:56:21.976760 kernel: RETBleed: Vulnerable
Jan 23 18:56:21.976768 kernel: Speculative Store Bypass: Vulnerable
Jan 23 18:56:21.976777 kernel: active return thunk: its_return_thunk
Jan 23 18:56:21.976786 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 23 18:56:21.976794 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 23 18:56:21.976803 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 23 18:56:21.976810 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 23 18:56:21.976819 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 23 18:56:21.976829 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 23 18:56:21.976836 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 23 18:56:21.976844 kernel: x86/fpu: Supporting XSAVE feature 0x800: 'Control-flow User registers'
Jan 23 18:56:21.976852 kernel: x86/fpu: Supporting XSAVE feature 0x20000: 'AMX Tile config'
Jan 23 18:56:21.976861 kernel: x86/fpu: Supporting XSAVE feature 0x40000: 'AMX Tile data'
Jan 23 18:56:21.976869 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 23 18:56:21.976877 kernel: x86/fpu: xstate_offset[5]:  832, xstate_sizes[5]:   64
Jan 23 18:56:21.976885 kernel: x86/fpu: xstate_offset[6]:  896, xstate_sizes[6]:  512
Jan 23 18:56:21.976893 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 23 18:56:21.976901 kernel: x86/fpu: xstate_offset[11]: 2432, xstate_sizes[11]:   16
Jan 23 18:56:21.976910 kernel: x86/fpu: xstate_offset[17]: 2496, xstate_sizes[17]:   64
Jan 23 18:56:21.976920 kernel: x86/fpu: xstate_offset[18]: 2560, xstate_sizes[18]: 8192
Jan 23 18:56:21.976929 kernel: x86/fpu: Enabled xstate features 0x608e7, context size is 10752 bytes, using 'compacted' format.
Jan 23 18:56:21.976938 kernel: Freeing SMP alternatives memory: 32K
Jan 23 18:56:21.976947 kernel: pid_max: default: 32768 minimum: 301
Jan 23 18:56:21.976956 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 18:56:21.976964 kernel: landlock: Up and running.
Jan 23 18:56:21.976972 kernel: SELinux:  Initializing.
Jan 23 18:56:21.976981 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 18:56:21.976989 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 23 18:56:21.976998 kernel: smpboot: CPU0: Intel INTEL(R) XEON(R) PLATINUM 8573C (family: 0x6, model: 0xcf, stepping: 0x2)
Jan 23 18:56:21.977006 kernel: Performance Events: unsupported p6 CPU model 207 no PMU driver, software events only.
Jan 23 18:56:21.977015 kernel: signal: max sigframe size: 11952
Jan 23 18:56:21.977025 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 18:56:21.977035 kernel: rcu: 	Max phase no-delay instances is 400.
Jan 23 18:56:21.977044 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 18:56:21.977053 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 23 18:56:21.977062 kernel: smp: Bringing up secondary CPUs ...
Jan 23 18:56:21.977072 kernel: smpboot: x86: Booting SMP configuration:
Jan 23 18:56:21.977081 kernel: .... node  #0, CPUs:       #1
Jan 23 18:56:21.977089 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 18:56:21.977098 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jan 23 18:56:21.977109 kernel: Memory: 8068832K/8383228K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 308180K reserved, 0K cma-reserved)
Jan 23 18:56:21.977118 kernel: devtmpfs: initialized
Jan 23 18:56:21.977126 kernel: x86/mm: Memory block size: 128MB
Jan 23 18:56:21.977134 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 23 18:56:21.977142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 18:56:21.977150 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 18:56:21.977158 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 18:56:21.977168 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 18:56:21.979126 kernel: audit: initializing netlink subsys (disabled)
Jan 23 18:56:21.979141 kernel: audit: type=2000 audit(1769194579.074:1): state=initialized audit_enabled=0 res=1
Jan 23 18:56:21.979150 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 18:56:21.979159 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 23 18:56:21.979169 kernel: cpuidle: using governor menu
Jan 23 18:56:21.979192 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 18:56:21.979201 kernel: dca service started, version 1.12.1
Jan 23 18:56:21.979211 kernel: e820: reserve RAM buffer [mem 0x044fe000-0x07ffffff]
Jan 23 18:56:21.979220 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 23 18:56:21.979231 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 23 18:56:21.979240 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 18:56:21.979250 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 18:56:21.979259 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 18:56:21.979268 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 18:56:21.979277 kernel: ACPI: Added _OSI(Module Device)
Jan 23 18:56:21.979287 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 18:56:21.979295 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 18:56:21.979304 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 18:56:21.979314 kernel: ACPI: Interpreter enabled
Jan 23 18:56:21.979322 kernel: ACPI: PM: (supports S0 S5)
Jan 23 18:56:21.979332 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 23 18:56:21.979341 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 23 18:56:21.979350 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 23 18:56:21.979360 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 23 18:56:21.979369 kernel: iommu: Default domain type: Translated
Jan 23 18:56:21.979378 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 23 18:56:21.979387 kernel: efivars: Registered efivars operations
Jan 23 18:56:21.979397 kernel: PCI: Using ACPI for IRQ routing
Jan 23 18:56:21.979406 kernel: PCI: System does not support PCI
Jan 23 18:56:21.979415 kernel: vgaarb: loaded
Jan 23 18:56:21.979424 kernel: clocksource: Switched to clocksource tsc-early
Jan 23 18:56:21.979433 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 18:56:21.979442 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 18:56:21.979451 kernel: pnp: PnP ACPI init
Jan 23 18:56:21.979460 kernel: pnp: PnP ACPI: found 3 devices
Jan 23 18:56:21.979468 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 23 18:56:21.979477 kernel: NET: Registered PF_INET protocol family
Jan 23 18:56:21.979488 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 23 18:56:21.979497 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 23 18:56:21.979507 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 18:56:21.979517 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 18:56:21.979526 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 23 18:56:21.979534 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 23 18:56:21.979543 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 18:56:21.979552 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 23 18:56:21.979563 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 18:56:21.979572 kernel: NET: Registered PF_XDP protocol family
Jan 23 18:56:21.979581 kernel: PCI: CLS 0 bytes, default 64
Jan 23 18:56:21.979591 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 23 18:56:21.979600 kernel: software IO TLB: mapped [mem 0x000000003a9b9000-0x000000003e9b9000] (64MB)
Jan 23 18:56:21.979608 kernel: RAPL PMU: API unit is 2^-32 Joules, 1 fixed counters, 10737418240 ms ovfl timer
Jan 23 18:56:21.979618 kernel: RAPL PMU: hw unit of domain psys 2^-0 Joules
Jan 23 18:56:21.979626 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2127345424d, max_idle_ns: 440795318347 ns
Jan 23 18:56:21.979635 kernel: clocksource: Switched to clocksource tsc
Jan 23 18:56:21.979645 kernel: Initialise system trusted keyrings
Jan 23 18:56:21.979653 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 23 18:56:21.979662 kernel: Key type asymmetric registered
Jan 23 18:56:21.979671 kernel: Asymmetric key parser 'x509' registered
Jan 23 18:56:21.979680 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 18:56:21.979689 kernel: io scheduler mq-deadline registered
Jan 23 18:56:21.979698 kernel: io scheduler kyber registered
Jan 23 18:56:21.979707 kernel: io scheduler bfq registered
Jan 23 18:56:21.979716 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 23 18:56:21.979727 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 18:56:21.979735 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:56:21.979744 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 23 18:56:21.979754 kernel: serial8250: ttyS2 at I/O 0x3e8 (irq = 4, base_baud = 115200) is a 16550A
Jan 23 18:56:21.979763 kernel: i8042: PNP: No PS/2 controller found.
Jan 23 18:56:21.979902 kernel: rtc_cmos 00:02: registered as rtc0
Jan 23 18:56:21.979982 kernel: rtc_cmos 00:02: setting system clock to 2026-01-23T18:56:21 UTC (1769194581)
Jan 23 18:56:21.980055 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 23 18:56:21.980068 kernel: intel_pstate: Intel P-state driver initializing
Jan 23 18:56:21.980078 kernel: efifb: probing for efifb
Jan 23 18:56:21.980088 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 23 18:56:21.980097 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 23 18:56:21.980106 kernel: efifb: scrolling: redraw
Jan 23 18:56:21.980115 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 23 18:56:21.980149 kernel: Console: switching to colour frame buffer device 128x48
Jan 23 18:56:21.980159 kernel: fb0: EFI VGA frame buffer device
Jan 23 18:56:21.980168 kernel: pstore: Using crash dump compression: deflate
Jan 23 18:56:21.980192 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 23 18:56:21.980201 kernel: NET: Registered PF_INET6 protocol family
Jan 23 18:56:21.980210 kernel: Segment Routing with IPv6
Jan 23 18:56:21.980219 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 18:56:21.980227 kernel: NET: Registered PF_PACKET protocol family
Jan 23 18:56:21.980237 kernel: Key type dns_resolver registered
Jan 23 18:56:21.980246 kernel: IPI shorthand broadcast: enabled
Jan 23 18:56:21.980255 kernel: sched_clock: Marking stable (3144005954, 96506448)->(3609683354, -369170952)
Jan 23 18:56:21.980264 kernel: registered taskstats version 1
Jan 23 18:56:21.980276 kernel: Loading compiled-in X.509 certificates
Jan 23 18:56:21.980285 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 2aec04a968f0111235eb989789145bc2b989f0c6'
Jan 23 18:56:21.980293 kernel: Demotion targets for Node 0: null
Jan 23 18:56:21.980301 kernel: Key type .fscrypt registered
Jan 23 18:56:21.980310 kernel: Key type fscrypt-provisioning registered
Jan 23 18:56:21.980319 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 18:56:21.980328 kernel: ima: Allocated hash algorithm: sha1
Jan 23 18:56:21.980337 kernel: ima: No architecture policies found
Jan 23 18:56:21.980346 kernel: clk: Disabling unused clocks
Jan 23 18:56:21.980357 kernel: Warning: unable to open an initial console.
Jan 23 18:56:21.980366 kernel: Freeing unused kernel image (initmem) memory: 46200K
Jan 23 18:56:21.980375 kernel: Write protecting the kernel read-only data: 40960k
Jan 23 18:56:21.980383 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K
Jan 23 18:56:21.980392 kernel: Run /init as init process
Jan 23 18:56:21.980402 kernel:   with arguments:
Jan 23 18:56:21.980410 kernel:     /init
Jan 23 18:56:21.980419 kernel:   with environment:
Jan 23 18:56:21.980428 kernel:     HOME=/
Jan 23 18:56:21.980439 kernel:     TERM=linux
Jan 23 18:56:21.980449 systemd[1]: Successfully made /usr/ read-only.
Jan 23 18:56:21.980461 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 18:56:21.980471 systemd[1]: Detected virtualization microsoft.
Jan 23 18:56:21.980481 systemd[1]: Detected architecture x86-64.
Jan 23 18:56:21.980491 systemd[1]: Running in initrd.
Jan 23 18:56:21.980500 systemd[1]: No hostname configured, using default hostname.
Jan 23 18:56:21.980512 systemd[1]: Hostname set to .
Jan 23 18:56:21.980521 systemd[1]: Initializing machine ID from random generator.
Jan 23 18:56:21.980531 systemd[1]: Queued start job for default target initrd.target.
Jan 23 18:56:21.980540 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 18:56:21.980549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 18:56:21.980559 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 18:56:21.980568 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 18:56:21.980577 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 18:56:21.980590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 18:56:21.980601 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 18:56:21.980611 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 18:56:21.980621 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 18:56:21.980630 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 18:56:21.980640 systemd[1]: Reached target paths.target - Path Units.
Jan 23 18:56:21.980649 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 18:56:21.980660 systemd[1]: Reached target swap.target - Swaps.
Jan 23 18:56:21.980669 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 18:56:21.980678 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 18:56:21.980687 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 18:56:21.980696 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 18:56:21.980705 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 18:56:21.980715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 18:56:21.980723 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 18:56:21.980732 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 18:56:21.980743 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 18:56:21.980752 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 18:56:21.980761 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 18:56:21.980769 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 18:56:21.980779 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 18:56:21.980788 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 18:56:21.980796 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 18:56:21.980806 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 18:56:21.980824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:21.980851 systemd-journald[186]: Collecting audit messages is disabled.
Jan 23 18:56:21.980878 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 18:56:21.980890 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 18:56:21.980902 systemd-journald[186]: Journal started
Jan 23 18:56:21.980928 systemd-journald[186]: Runtime Journal (/run/log/journal/9a785fc8054b49ddb5f306293bd00179) is 8M, max 158.6M, 150.6M free.
Jan 23 18:56:21.989002 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 18:56:21.991708 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 18:56:21.995661 systemd-modules-load[188]: Inserted module 'overlay'
Jan 23 18:56:21.998994 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 18:56:22.007287 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 18:56:22.012860 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:22.021376 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 18:56:22.034160 systemd-tmpfiles[200]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 18:56:22.038937 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 18:56:22.043112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 18:56:22.047011 systemd-modules-load[188]: Inserted module 'br_netfilter'
Jan 23 18:56:22.047612 kernel: Bridge firewalling registered
Jan 23 18:56:22.049278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 18:56:22.049676 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 18:56:22.049988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 18:56:22.052289 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 18:56:22.073149 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 18:56:22.075828 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 18:56:22.080407 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 18:56:22.084787 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 18:56:22.090770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 18:56:22.109658 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=e498a861432c458392bc8ae0919597d8f4554cdcc46b00c7f3d7a634c3492c81
Jan 23 18:56:22.132941 systemd-resolved[226]: Positive Trust Anchors:
Jan 23 18:56:22.134704 systemd-resolved[226]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 18:56:22.137955 systemd-resolved[226]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 18:56:22.154466 systemd-resolved[226]: Defaulting to hostname 'linux'.
Jan 23 18:56:22.157682 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 18:56:22.160302 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 18:56:22.194191 kernel: SCSI subsystem initialized
Jan 23 18:56:22.202188 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 18:56:22.211190 kernel: iscsi: registered transport (tcp)
Jan 23 18:56:22.230455 kernel: iscsi: registered transport (qla4xxx)
Jan 23 18:56:22.230499 kernel: QLogic iSCSI HBA Driver
Jan 23 18:56:22.244729 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 18:56:22.253373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 18:56:22.259524 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 18:56:22.290481 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 18:56:22.294348 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 18:56:22.346192 kernel: raid6: avx512x4 gen() 44301 MB/s
Jan 23 18:56:22.364195 kernel: raid6: avx512x2 gen() 43470 MB/s
Jan 23 18:56:22.381185 kernel: raid6: avx512x1 gen() 24955 MB/s
Jan 23 18:56:22.399186 kernel: raid6: avx2x4 gen() 34281 MB/s
Jan 23 18:56:22.417187 kernel: raid6: avx2x2 gen() 36021 MB/s
Jan 23 18:56:22.434867 kernel: raid6: avx2x1 gen() 30559 MB/s
Jan 23 18:56:22.434950 kernel: raid6: using algorithm avx512x4 gen() 44301 MB/s
Jan 23 18:56:22.453743 kernel: raid6: .... xor() 7825 MB/s, rmw enabled
Jan 23 18:56:22.453830 kernel: raid6: using avx512x2 recovery algorithm
Jan 23 18:56:22.473202 kernel: xor: automatically using best checksumming function   avx
Jan 23 18:56:22.599202 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 18:56:22.604082 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 18:56:22.605774 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 18:56:22.628320 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Jan 23 18:56:22.632578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 18:56:22.639850 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 18:56:22.660860 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation
Jan 23 18:56:22.680290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 18:56:22.683293 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 18:56:22.715026 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 18:56:22.725887 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 18:56:22.765189 kernel: cryptd: max_cpu_qlen set to 1000
Jan 23 18:56:22.795202 kernel: hv_vmbus: Vmbus version:5.3
Jan 23 18:56:22.794837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 18:56:22.795112 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:22.802391 kernel: AES CTR mode by8 optimization enabled
Jan 23 18:56:22.802128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:22.806587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 18:56:22.814644 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 23 18:56:22.814678 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 23 18:56:22.831200 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 23 18:56:22.844334 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 18:56:22.849023 kernel: PTP clock support registered
Jan 23 18:56:22.849055 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 23 18:56:22.854898 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 18:56:22.866401 kernel: hv_vmbus: registering driver hv_netvsc
Jan 23 18:56:22.866436 kernel: hv_vmbus: registering driver hid_hyperv
Jan 23 18:56:22.870782 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 23 18:56:22.870818 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 23 18:56:22.878194 kernel: hv_vmbus: registering driver hv_pci
Jan 23 18:56:22.886212 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad6ad3e (unnamed net_device) (uninitialized): VF slot 1 added
Jan 23 18:56:22.900199 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI VMBus probing: Using version 0x10004
Jan 23 18:56:22.906194 kernel: hv_vmbus: registering driver hv_storvsc
Jan 23 18:56:22.909201 kernel: scsi host0: storvsc_host_t
Jan 23 18:56:22.909423 kernel: hv_pci 7ad35d50-c05b-47ab-b3a0-56a9a845852b: PCI host bridge to bus c05b:00
Jan 23 18:56:22.911983 kernel: pci_bus c05b:00: root bus resource [mem 0xfc0000000-0xfc007ffff window]
Jan 23 18:56:22.913593 kernel: pci_bus c05b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 18:56:22.913751 kernel: scsi 0:0:0:2: CD-ROM            Msft     Virtual DVD-ROM  1.0  PQ: 0 ANSI: 5
Jan 23 18:56:22.930485 kernel: pci c05b:00:00.0: [1414:00a9] type 00 class 0x010802 PCIe Endpoint
Jan 23 18:56:22.930535 kernel: hv_utils: Registering HyperV Utility Driver
Jan 23 18:56:22.930548 kernel: hv_vmbus: registering driver hv_utils
Jan 23 18:56:22.930559 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]
Jan 23 18:56:22.936246 kernel: hv_utils: Shutdown IC version 3.2
Jan 23 18:56:22.942195 kernel: pci_bus c05b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 18:56:22.942510 kernel: pci c05b:00:00.0: BAR 0 [mem 0xfc0000000-0xfc007ffff 64bit]: assigned
Jan 23 18:56:22.947587 kernel: hv_utils: Heartbeat IC version 3.0
Jan 23 18:56:22.947620 kernel: hv_utils: TimeSync IC version 4.0
Jan 23 18:56:23.417097 systemd-resolved[226]: Clock change detected. Flushing caches.
Jan 23 18:56:23.434075 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 23 18:56:23.434229 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 23 18:56:23.438998 kernel: nvme nvme0: pci function c05b:00:00.0
Jan 23 18:56:23.439153 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 23 18:56:23.439265 kernel: nvme c05b:00:00.0: enabling device (0000 -> 0002)
Jan 23 18:56:23.592712 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 18:56:23.597696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:56:23.611699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#204 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 18:56:23.628696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#28 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 23 18:56:23.838706 kernel: nvme nvme0: using unchecked data buffer
Jan 23 18:56:24.019551 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - MSFT NVMe Accelerator v1.0 EFI-SYSTEM.
Jan 23 18:56:24.042691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM.
Jan 23 18:56:24.077892 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - MSFT NVMe Accelerator v1.0 ROOT.
Jan 23 18:56:24.210986 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 23 18:56:24.216763 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - MSFT NVMe Accelerator v1.0 USR-A.
Jan 23 18:56:24.219960 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 18:56:24.223486 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 18:56:24.226648 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 18:56:24.230802 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 18:56:24.234327 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 18:56:24.238776 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 18:56:24.261534 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 18:56:24.272701 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:56:24.416706 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI VMBus probing: Using version 0x10004
Jan 23 18:56:24.425697 kernel: hv_pci 00000001-7870-47b5-b203-907d12ca697e: PCI host bridge to bus 7870:00
Jan 23 18:56:24.425872 kernel: pci_bus 7870:00: root bus resource [mem 0xfc2000000-0xfc4007fff window]
Jan 23 18:56:24.425993 kernel: pci_bus 7870:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 23 18:56:24.433053 kernel: pci 7870:00:00.0: [1414:00ba] type 00 class 0x020000 PCIe Endpoint
Jan 23 18:56:24.438872 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]
Jan 23 18:56:24.444913 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]
Jan 23 18:56:24.446780 kernel: pci 7870:00:00.0: enabling Extended Tags
Jan 23 18:56:24.469323 kernel: pci_bus 7870:00: busn_res: [bus 00-ff] end is updated to 00
Jan 23 18:56:24.469454 kernel: pci 7870:00:00.0: BAR 0 [mem 0xfc2000000-0xfc3ffffff 64bit pref]: assigned
Jan 23 18:56:24.469555 kernel: pci 7870:00:00.0: BAR 4 [mem 0xfc4000000-0xfc4007fff 64bit pref]: assigned
Jan 23 18:56:24.472904 kernel: mana 7870:00:00.0: enabling device (0000 -> 0002)
Jan 23 18:56:24.483704 kernel: mana 7870:00:00.0: Microsoft Azure Network Adapter protocol version: 0.1.1
Jan 23 18:56:24.486769 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad6ad3e eth0: VF registering: eth1
Jan 23 18:56:24.486930 kernel: mana 7870:00:00.0 eth1: joined to eth0
Jan 23 18:56:24.490845 kernel: mana 7870:00:00.0 enP30832s1: renamed from eth1
Jan 23 18:56:25.281733 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 18:56:25.282968 disk-uuid[661]: The operation has completed successfully.
Jan 23 18:56:25.339487 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 18:56:25.339579 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 18:56:25.378861 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 18:56:25.389488 sh[700]: Success
Jan 23 18:56:25.419882 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 18:56:25.419935 kernel: device-mapper: uevent: version 1.0.3
Jan 23 18:56:25.421452 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 18:56:25.430695 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jan 23 18:56:25.694972 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 18:56:25.700710 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 18:56:25.714120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 18:56:25.726701 kernel: BTRFS: device fsid 4711e7dc-9586-49d4-8dcc-466f082e7841 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (713)
Jan 23 18:56:25.728688 kernel: BTRFS info (device dm-0): first mount of filesystem 4711e7dc-9586-49d4-8dcc-466f082e7841
Jan 23 18:56:25.728768 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:26.032299 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 18:56:26.032390 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 18:56:26.034184 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 18:56:26.049171 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 18:56:26.050091 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 18:56:26.050206 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 18:56:26.050976 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 18:56:26.060794 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 18:56:26.098692 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (748)
Jan 23 18:56:26.102532 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:26.102571 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:26.121718 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:56:26.121758 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:56:26.122816 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:56:26.127712 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:26.129139 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 18:56:26.135628 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 18:56:26.149772 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 18:56:26.154359 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 18:56:26.189625 systemd-networkd[882]: lo: Link UP
Jan 23 18:56:26.189634 systemd-networkd[882]: lo: Gained carrier
Jan 23 18:56:26.196298 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16
Jan 23 18:56:26.190630 systemd-networkd[882]: Enumeration completed
Jan 23 18:56:26.191412 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:26.204287 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64
Jan 23 18:56:26.204433 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad6ad3e eth0: Data path switched to VF: enP30832s1
Jan 23 18:56:26.191415 systemd-networkd[882]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 18:56:26.191758 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 18:56:26.192308 systemd[1]: Reached target network.target - Network.
Jan 23 18:56:26.202090 systemd-networkd[882]: enP30832s1: Link UP
Jan 23 18:56:26.202170 systemd-networkd[882]: eth0: Link UP
Jan 23 18:56:26.202315 systemd-networkd[882]: eth0: Gained carrier
Jan 23 18:56:26.202326 systemd-networkd[882]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 18:56:26.206739 systemd-networkd[882]: enP30832s1: Gained carrier
Jan 23 18:56:26.217728 systemd-networkd[882]: eth0: DHCPv4 address 10.200.4.15/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 23 18:56:27.555335 ignition[865]: Ignition 2.22.0
Jan 23 18:56:27.555348 ignition[865]: Stage: fetch-offline
Jan 23 18:56:27.555462 ignition[865]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:27.555470 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:56:27.555568 ignition[865]: parsed url from cmdline: ""
Jan 23 18:56:27.555571 ignition[865]: no config URL provided
Jan 23 18:56:27.555577 ignition[865]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:56:27.562444 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 18:56:27.555582 ignition[865]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:56:27.567846 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 18:56:27.555587 ignition[865]: failed to fetch config: resource requires networking
Jan 23 18:56:27.558063 ignition[865]: Ignition finished successfully
Jan 23 18:56:27.590904 ignition[892]: Ignition 2.22.0
Jan 23 18:56:27.590914 ignition[892]: Stage: fetch
Jan 23 18:56:27.591154 ignition[892]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:27.591163 ignition[892]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:56:27.591248 ignition[892]: parsed url from cmdline: ""
Jan 23 18:56:27.591251 ignition[892]: no config URL provided
Jan 23 18:56:27.591261 ignition[892]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 18:56:27.591268 ignition[892]: no config at "/usr/lib/ignition/user.ign"
Jan 23 18:56:27.591289 ignition[892]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 23 18:56:27.684575 ignition[892]: GET result: OK
Jan 23 18:56:27.684671 ignition[892]: config has been read from IMDS userdata
Jan 23 18:56:27.684723 ignition[892]: parsing config with SHA512: 1d8cf7216566ece205cb4b4cd3a392f9dc2b10b21ee24cff7c4827abd2552e5fc590e30fea848209616f1ef1ba0a88580f563667a5480e093d859e27e674d46d
Jan 23 18:56:27.689162 unknown[892]: fetched base config from "system"
Jan 23 18:56:27.689179 unknown[892]: fetched base config from "system"
Jan 23 18:56:27.689808 ignition[892]: fetch: fetch complete
Jan 23 18:56:27.689186 unknown[892]: fetched user config from "azure"
Jan 23 18:56:27.689814 ignition[892]: fetch: fetch passed
Jan 23 18:56:27.692474 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 18:56:27.689853 ignition[892]: Ignition finished successfully
Jan 23 18:56:27.696623 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 18:56:27.725351 ignition[898]: Ignition 2.22.0
Jan 23 18:56:27.725361 ignition[898]: Stage: kargs
Jan 23 18:56:27.727989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 18:56:27.725568 ignition[898]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:27.733572 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 18:56:27.725576 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:56:27.726729 ignition[898]: kargs: kargs passed
Jan 23 18:56:27.726765 ignition[898]: Ignition finished successfully
Jan 23 18:56:27.760264 ignition[904]: Ignition 2.22.0
Jan 23 18:56:27.760273 ignition[904]: Stage: disks
Jan 23 18:56:27.760475 ignition[904]: no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:27.763188 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 18:56:27.760483 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:56:27.764260 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 18:56:27.761608 ignition[904]: disks: disks passed
Jan 23 18:56:27.768813 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 18:56:27.761646 ignition[904]: Ignition finished successfully
Jan 23 18:56:27.769066 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 18:56:27.769092 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 18:56:27.769114 systemd[1]: Reached target basic.target - Basic System.
Jan 23 18:56:27.772545 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 18:56:27.843321 systemd-fsck[912]: ROOT: clean, 15/7326000 files, 477845/7359488 blocks
Jan 23 18:56:27.847480 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 18:56:27.852842 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 18:56:27.896856 systemd-networkd[882]: eth0: Gained IPv6LL
Jan 23 18:56:28.122699 kernel: EXT4-fs (nvme0n1p9): mounted filesystem dcb97a38-a4f5-43e7-bcb0-85a5c1e2a446 r/w with ordered data mode. Quota mode: none.
Jan 23 18:56:28.122731 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 18:56:28.123502 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 18:56:28.137479 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:56:28.153761 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 18:56:28.158813 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 23 18:56:28.165531 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 18:56:28.178280 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (921)
Jan 23 18:56:28.178307 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:28.178318 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:28.165558 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 18:56:28.181830 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 18:56:28.183771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 18:56:28.191147 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:56:28.191181 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:56:28.192227 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:56:28.193821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:56:28.625176 coreos-metadata[923]: Jan 23 18:56:28.625 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 23 18:56:28.628955 coreos-metadata[923]: Jan 23 18:56:28.628 INFO Fetch successful
Jan 23 18:56:28.631782 coreos-metadata[923]: Jan 23 18:56:28.629 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 23 18:56:28.638223 coreos-metadata[923]: Jan 23 18:56:28.638 INFO Fetch successful
Jan 23 18:56:28.652893 coreos-metadata[923]: Jan 23 18:56:28.652 INFO wrote hostname ci-4459.2.3-a-2aee31126f to /sysroot/etc/hostname
Jan 23 18:56:28.656261 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 23 18:56:28.858706 initrd-setup-root[952]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 18:56:28.909222 initrd-setup-root[959]: cut: /sysroot/etc/group: No such file or directory
Jan 23 18:56:28.914708 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 18:56:28.919332 initrd-setup-root[973]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 18:56:29.863712 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 18:56:29.868477 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 18:56:29.882841 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 18:56:29.891766 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 18:56:29.897749 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:29.921329 ignition[1040]: INFO : Ignition 2.22.0
Jan 23 18:56:29.922612 ignition[1040]: INFO : Stage: mount
Jan 23 18:56:29.922612 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 18:56:29.922612 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 23 18:56:29.937817 ignition[1040]: INFO : mount: mount passed
Jan 23 18:56:29.937817 ignition[1040]: INFO : Ignition finished successfully
Jan 23 18:56:29.923753 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 18:56:29.931937 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 18:56:29.940198 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 18:56:29.956915 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 18:56:29.979455 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 (259:5) scanned by mount (1052)
Jan 23 18:56:29.979559 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem a15cc984-6718-480b-8520-c0d724ebf6fe
Jan 23 18:56:29.981078 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Jan 23 18:56:29.988151 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 18:56:29.988184 kernel: BTRFS info (device nvme0n1p6): turning on async discard
Jan 23 18:56:29.989423 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 18:56:29.991391 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 18:56:30.017613 ignition[1069]: INFO : Ignition 2.22.0 Jan 23 18:56:30.017613 ignition[1069]: INFO : Stage: files Jan 23 18:56:30.021724 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:30.021724 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:56:30.021724 ignition[1069]: DEBUG : files: compiled without relabeling support, skipping Jan 23 18:56:30.035427 ignition[1069]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 18:56:30.035427 ignition[1069]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 18:56:30.067703 ignition[1069]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 18:56:30.069388 ignition[1069]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 18:56:30.069388 ignition[1069]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 18:56:30.068022 unknown[1069]: wrote ssh authorized keys file for user: core Jan 23 18:56:30.101899 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:30.105739 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 23 18:56:30.163305 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 18:56:30.284977 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:30.289786 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:30.316751 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 23 18:56:30.539733 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 18:56:31.130214 ignition[1069]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 23 18:56:31.134781 ignition[1069]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 18:56:31.157663 ignition[1069]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:31.166919 ignition[1069]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 18:56:31.166919 ignition[1069]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 18:56:31.166919 ignition[1069]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:31.173770 ignition[1069]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 18:56:31.173770 ignition[1069]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:31.173770 ignition[1069]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 18:56:31.173770 ignition[1069]: INFO : files: files passed Jan 23 18:56:31.173770 ignition[1069]: INFO : Ignition finished successfully Jan 23 18:56:31.172588 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 18:56:31.187553 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 18:56:31.192459 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 18:56:31.200860 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 18:56:31.200947 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 18:56:31.219703 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:31.219703 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:31.228024 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 18:56:31.224238 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:31.228349 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 18:56:31.232492 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 18:56:31.265339 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 18:56:31.265427 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
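
For orientation, the files-stage operations logged above (the core user and its SSH key, the downloaded helm tarball, the kubernetes.raw sysext link, and the enabled prepare-helm.service) correspond to an Ignition v3 config shaped roughly like the sketch below. This is an illustrative reconstruction, not the config this machine actually received; the spec version, SSH key, and unit body are placeholders:

    import json

    config = {
        "ignition": {"version": "3.4.0"},  # placeholder spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
        }]},
        "storage": {
            "files": [
                {"path": "/opt/helm-v3.17.3-linux-amd64.tar.gz",
                 "contents": {"source": "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz"}},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw",
                 "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"},
            ],
        },
        "systemd": {"units": [{
            "name": "prepare-helm.service",
            "enabled": True,
            "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n(placeholder)",
        }]},
    }
    print(json.dumps(config, indent=2))
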
Jan 23 18:56:31.269908 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 18:56:31.271877 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 18:56:31.271940 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 18:56:31.272579 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 18:56:31.294750 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:56:31.300600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 18:56:31.313582 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:31.313769 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:31.313960 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 18:56:31.314225 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 18:56:31.314326 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 18:56:31.326790 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 18:56:31.330840 systemd[1]: Stopped target basic.target - Basic System. Jan 23 18:56:31.334827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 18:56:31.337863 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 18:56:31.341039 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 18:56:31.343900 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 23 18:56:31.347457 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 18:56:31.353173 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 18:56:31.355316 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 18:56:31.359200 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 18:56:31.376793 systemd[1]: Stopped target swap.target - Swaps. Jan 23 18:56:31.379778 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 18:56:31.379889 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 18:56:31.384037 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:31.387836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:31.392805 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 18:56:31.394070 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:31.398957 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 18:56:31.399088 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 18:56:31.408201 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 18:56:31.408703 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 18:56:31.414436 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 18:56:31.414549 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 18:56:31.416178 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 23 18:56:31.416266 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Jan 23 18:56:31.418878 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 18:56:31.419125 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 18:56:31.419250 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:31.420845 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 18:56:31.420960 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 18:56:31.421075 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:31.421370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 18:56:31.423806 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 18:56:31.438468 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 18:56:31.438558 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 18:56:31.459097 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 18:56:31.459181 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 18:56:31.459631 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 18:56:31.459724 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 18:56:31.459823 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 18:56:31.459856 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 18:56:31.460045 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 18:56:31.460075 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 18:56:31.460343 systemd[1]: Stopped target network.target - Network. Jan 23 18:56:31.460373 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 18:56:31.460412 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 18:56:31.460646 systemd[1]: Stopped target paths.target - Path Units. Jan 23 18:56:31.460671 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 18:56:31.464948 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:31.472196 ignition[1123]: INFO : Ignition 2.22.0 Jan 23 18:56:31.472196 ignition[1123]: INFO : Stage: umount Jan 23 18:56:31.472196 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 18:56:31.472196 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 23 18:56:31.472196 ignition[1123]: INFO : umount: umount passed Jan 23 18:56:31.472196 ignition[1123]: INFO : Ignition finished successfully Jan 23 18:56:31.472737 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 18:56:31.473905 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 18:56:31.475974 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 18:56:31.476014 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 18:56:31.481353 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 18:56:31.481387 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 18:56:31.508470 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 18:56:31.508530 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 18:56:31.512754 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 18:56:31.512793 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 18:56:31.515046 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 18:56:31.519791 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 18:56:31.526951 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 18:56:31.527518 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 18:56:31.527624 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 18:56:31.533401 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jan 23 18:56:31.533577 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 18:56:31.533669 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 18:56:31.540288 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jan 23 18:56:31.541117 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 23 18:56:31.555771 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 18:56:31.555813 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:31.560749 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 18:56:31.564749 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 18:56:31.566545 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 18:56:31.570803 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 18:56:31.570847 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:31.574721 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 18:56:31.574754 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:31.585757 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 18:56:31.586995 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:56:31.591813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:31.594572 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jan 23 18:56:31.594620 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:56:31.610696 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad6ad3e eth0: Data path switched from VF: enP30832s1 Jan 23 18:56:31.610870 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:56:31.615110 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 18:56:31.615246 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:31.618446 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 18:56:31.618525 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 18:56:31.621531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 18:56:31.621574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:31.624828 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 18:56:31.624849 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:31.627729 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 18:56:31.627769 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. 
Jan 23 18:56:31.631959 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 18:56:31.632002 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 18:56:31.635778 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 18:56:31.635817 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 18:56:31.640796 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 18:56:31.644728 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 23 18:56:31.644782 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:31.645256 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 18:56:31.645293 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:31.646632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:31.646670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:31.654829 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jan 23 18:56:31.654884 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jan 23 18:56:31.654922 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jan 23 18:56:31.689369 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 18:56:31.689442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 18:56:31.895268 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 18:56:31.895378 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 18:56:31.899156 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 18:56:31.904733 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 18:56:31.904797 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 18:56:31.907729 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 18:56:31.926320 systemd[1]: Switching root. Jan 23 18:56:32.027033 systemd-journald[186]: Journal stopped Jan 23 18:56:36.094633 systemd-journald[186]: Received SIGTERM from PID 1 (systemd). Jan 23 18:56:36.094664 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 18:56:36.094691 kernel: SELinux: policy capability open_perms=1 Jan 23 18:56:36.094701 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 18:56:36.094709 kernel: SELinux: policy capability always_check_network=0 Jan 23 18:56:36.094718 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 18:56:36.094727 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 18:56:36.094736 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 18:56:36.094747 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 18:56:36.094756 kernel: SELinux: policy capability userspace_initial_context=0 Jan 23 18:56:36.094766 kernel: audit: type=1403 audit(1769194593.012:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 18:56:36.094777 systemd[1]: Successfully loaded SELinux policy in 151.785ms. Jan 23 18:56:36.094788 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.208ms. 
Jan 23 18:56:36.094799 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 23 18:56:36.094812 systemd[1]: Detected virtualization microsoft. Jan 23 18:56:36.094821 systemd[1]: Detected architecture x86-64. Jan 23 18:56:36.094830 systemd[1]: Detected first boot. Jan 23 18:56:36.094841 systemd[1]: Hostname set to <ci-4459.2.3-a-2aee31126f>. Jan 23 18:56:36.094851 systemd[1]: Initializing machine ID from random generator. Jan 23 18:56:36.094862 zram_generator::config[1166]: No configuration found. Jan 23 18:56:36.094875 kernel: Guest personality initialized and is inactive Jan 23 18:56:36.094886 kernel: VMCI host device registered (name=vmci, major=10, minor=259) Jan 23 18:56:36.094895 kernel: Initialized host personality Jan 23 18:56:36.094904 kernel: NET: Registered PF_VSOCK protocol family Jan 23 18:56:36.094913 systemd[1]: Populated /etc with preset unit settings. Jan 23 18:56:36.094925 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jan 23 18:56:36.094935 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 18:56:36.094945 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 18:56:36.094957 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 18:56:36.094967 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 18:56:36.094977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 18:56:36.094987 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 18:56:36.094996 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 18:56:36.095006 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 18:56:36.095016 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 18:56:36.095029 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 18:56:36.095039 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 18:56:36.095050 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 18:56:36.095059 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 18:56:36.095069 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 18:56:36.095082 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 18:56:36.095092 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 18:56:36.095104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 18:56:36.095117 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 18:56:36.095128 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 18:56:36.095138 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 18:56:36.095148 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 18:56:36.095157 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 18:56:36.095168 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 18:56:36.095178 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 18:56:36.095189 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 18:56:36.095199 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 18:56:36.095209 systemd[1]: Reached target slices.target - Slice Units. Jan 23 18:56:36.095219 systemd[1]: Reached target swap.target - Swaps. Jan 23 18:56:36.095229 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 18:56:36.095239 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 18:56:36.095251 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 23 18:56:36.095261 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 18:56:36.095271 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 18:56:36.095280 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 18:56:36.095291 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 18:56:36.095300 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 18:56:36.095310 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 18:56:36.095323 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 18:56:36.095333 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:36.095342 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 18:56:36.095353 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 18:56:36.095364 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 18:56:36.095376 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 18:56:36.095387 systemd[1]: Reached target machines.target - Containers. Jan 23 18:56:36.095397 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 18:56:36.095407 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:56:36.095416 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 18:56:36.095423 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 18:56:36.095430 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:56:36.095440 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:56:36.095451 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:56:36.095462 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 18:56:36.095473 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:56:36.095484 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
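
The +/- markers in the systemd 256.8 banner above record compile-time features: "+SELINUX" is why the SELinux policy load succeeded a moment earlier, and "-APPARMOR" rules out the alternative LSM. A tiny sketch that splits the banner (copied verbatim from the log) into enabled and disabled feature sets:

    # Feature string copied from the 'systemd 256.8 running' banner above.
    banner = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
              "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
              "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
              "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
              "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")
    enabled = {t[1:] for t in banner.split() if t.startswith("+")}
    disabled = {t[1:] for t in banner.split() if t.startswith("-")}
    print(sorted(enabled))
    print(sorted(disabled))
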
Jan 23 18:56:36.095493 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 18:56:36.095500 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 18:56:36.095507 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 18:56:36.095514 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 18:56:36.095525 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:56:36.095535 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 18:56:36.095545 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 18:56:36.095556 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 18:56:36.095568 kernel: loop: module loaded Jan 23 18:56:36.095578 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 18:56:36.095587 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 23 18:56:36.095594 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 18:56:36.095600 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 18:56:36.095607 systemd[1]: Stopped verity-setup.service. Jan 23 18:56:36.095614 kernel: fuse: init (API version 7.41) Jan 23 18:56:36.095620 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:36.095644 systemd-journald[1256]: Collecting audit messages is disabled. Jan 23 18:56:36.095663 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 18:56:36.095670 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 18:56:36.095730 systemd-journald[1256]: Journal started Jan 23 18:56:36.095762 systemd-journald[1256]: Runtime Journal (/run/log/journal/de1ca0e82b25429bb29b6f1d8d23ffef) is 8M, max 158.6M, 150.6M free. Jan 23 18:56:35.636344 systemd[1]: Queued start job for default target multi-user.target. Jan 23 18:56:35.648239 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 18:56:35.648569 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 18:56:36.101974 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 18:56:36.104555 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 18:56:36.106863 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 18:56:36.109308 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 18:56:36.111588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 18:56:36.113838 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 18:56:36.134841 kernel: ACPI: bus type drm_connector registered Jan 23 18:56:36.118234 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 18:56:36.121124 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 18:56:36.121271 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 18:56:36.124044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jan 23 18:56:36.124232 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:56:36.127080 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:56:36.127228 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:56:36.130452 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 18:56:36.131831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 18:56:36.134886 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:56:36.135560 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:56:36.139024 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:56:36.139167 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:56:36.141930 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 18:56:36.143583 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 18:56:36.146945 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 18:56:36.157734 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 18:56:36.170409 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 18:56:36.175037 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 18:56:36.178775 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 18:56:36.179744 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 18:56:36.183273 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 23 18:56:36.196842 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 18:56:36.199156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:56:36.201797 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 18:56:36.204932 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 18:56:36.207478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:56:36.209910 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 18:56:36.213817 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:56:36.215359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 18:56:36.217939 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 18:56:36.223803 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 18:56:36.227700 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 23 18:56:36.231085 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 18:56:36.233542 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 18:56:36.236724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Jan 23 18:56:36.262961 systemd-journald[1256]: Time spent on flushing to /var/log/journal/de1ca0e82b25429bb29b6f1d8d23ffef is 14.134ms for 987 entries. Jan 23 18:56:36.262961 systemd-journald[1256]: System Journal (/var/log/journal/de1ca0e82b25429bb29b6f1d8d23ffef) is 8M, max 2.6G, 2.6G free. Jan 23 18:56:36.304323 systemd-journald[1256]: Received client request to flush runtime journal. Jan 23 18:56:36.304361 kernel: loop0: detected capacity change from 0 to 110984 Jan 23 18:56:36.269435 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 18:56:36.271980 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 18:56:36.277806 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 23 18:56:36.305334 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 18:56:36.335554 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 18:56:36.345721 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 23 18:56:36.405235 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 18:56:36.410808 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 18:56:36.445034 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 18:56:36.445050 systemd-tmpfiles[1324]: ACLs are not supported, ignoring. Jan 23 18:56:36.447288 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 18:56:36.629853 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 18:56:36.647709 kernel: loop1: detected capacity change from 0 to 219144 Jan 23 18:56:36.649760 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 18:56:36.706702 kernel: loop2: detected capacity change from 0 to 27936 Jan 23 18:56:36.840742 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 18:56:36.846847 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 18:56:36.872358 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Jan 23 18:56:37.041501 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 18:56:37.047615 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 18:56:37.099693 kernel: loop3: detected capacity change from 0 to 128560 Jan 23 18:56:37.112810 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 18:56:37.128901 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 18:56:37.216702 kernel: mousedev: PS/2 mouse device common for all mice Jan 23 18:56:37.225805 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jan 23 18:56:37.239704 kernel: hv_vmbus: registering driver hv_balloon Jan 23 18:56:37.244699 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 23 18:56:37.244751 kernel: hv_vmbus: registering driver hyperv_fb Jan 23 18:56:37.250111 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 23 18:56:37.250158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#54 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 23 18:56:37.256501 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 23 18:56:37.256555 kernel: Console: switching to colour dummy device 80x25 Jan 23 18:56:37.261710 kernel: Console: switching to colour frame buffer device 128x48 Jan 23 18:56:37.423670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:37.423875 systemd-networkd[1345]: lo: Link UP Jan 23 18:56:37.423879 systemd-networkd[1345]: lo: Gained carrier Jan 23 18:56:37.426185 systemd-networkd[1345]: Enumeration completed Jan 23 18:56:37.426344 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 18:56:37.426477 systemd-networkd[1345]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:56:37.426481 systemd-networkd[1345]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:56:37.436717 kernel: mana 7870:00:00.0 enP30832s1: Configured vPort 0 PD 18 DB 16 Jan 23 18:56:37.434861 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 23 18:56:37.440008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 18:56:37.446616 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:37.447950 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:37.454695 kernel: mana 7870:00:00.0 enP30832s1: Configured steering vPort 0 entries 64 Jan 23 18:56:37.457705 kernel: hv_netvsc f8615163-0000-1000-2000-000d3ad6ad3e eth0: Data path switched to VF: enP30832s1 Jan 23 18:56:37.458168 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:37.462736 kernel: loop4: detected capacity change from 0 to 110984 Jan 23 18:56:37.469031 systemd-networkd[1345]: enP30832s1: Link UP Jan 23 18:56:37.469119 systemd-networkd[1345]: eth0: Link UP Jan 23 18:56:37.469122 systemd-networkd[1345]: eth0: Gained carrier Jan 23 18:56:37.469140 systemd-networkd[1345]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:56:37.474933 systemd-networkd[1345]: enP30832s1: Gained carrier Jan 23 18:56:37.480764 kernel: loop5: detected capacity change from 0 to 219144 Jan 23 18:56:37.480749 systemd-networkd[1345]: eth0: DHCPv4 address 10.200.4.15/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 23 18:56:37.510316 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 18:56:37.526716 kernel: loop6: detected capacity change from 0 to 27936 Jan 23 18:56:37.534310 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 18:56:37.536108 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:37.554347 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Jan 23 18:56:37.557695 kernel: loop7: detected capacity change from 0 to 128560 Jan 23 18:56:37.571981 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - MSFT NVMe Accelerator v1.0 OEM. Jan 23 18:56:37.582962 (sd-merge)[1412]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 23 18:56:37.584834 (sd-merge)[1412]: Merged extensions into '/usr'. Jan 23 18:56:37.604343 systemd[1]: Reload requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 18:56:37.604356 systemd[1]: Reloading... Jan 23 18:56:37.644126 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 23 18:56:37.675762 zram_generator::config[1456]: No configuration found. Jan 23 18:56:37.871240 systemd[1]: Reloading finished in 266 ms. Jan 23 18:56:37.899550 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 18:56:37.906502 systemd[1]: Starting ensure-sysext.service... Jan 23 18:56:37.910781 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 18:56:37.915729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 18:56:37.918883 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 18:56:37.933808 systemd[1]: Reload requested from client PID 1514 ('systemctl') (unit ensure-sysext.service)... Jan 23 18:56:37.933820 systemd[1]: Reloading... Jan 23 18:56:37.948351 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 23 18:56:37.963300 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 23 18:56:37.963537 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 18:56:37.963739 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 18:56:37.964212 systemd-tmpfiles[1516]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 18:56:37.964392 systemd-tmpfiles[1516]: ACLs are not supported, ignoring. Jan 23 18:56:37.964441 systemd-tmpfiles[1516]: ACLs are not supported, ignoring. Jan 23 18:56:37.966616 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:56:37.966625 systemd-tmpfiles[1516]: Skipping /boot Jan 23 18:56:37.977406 systemd-tmpfiles[1516]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 18:56:37.978169 systemd-tmpfiles[1516]: Skipping /boot Jan 23 18:56:38.011709 zram_generator::config[1553]: No configuration found. Jan 23 18:56:38.199511 systemd[1]: Reloading finished in 265 ms. Jan 23 18:56:38.223030 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 18:56:38.223431 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 18:56:38.229886 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:56:38.232198 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 18:56:38.235881 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 18:56:38.239870 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
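
The "(sd-merge)" lines above are systemd-sysext assembling the /usr overlay: it discovers extension images and mounts them together over /usr, which is how 'containerd-flatcar', 'docker-flatcar', 'kubernetes' (via the kubernetes.raw link written during the files stage) and 'oem-azure' all appear at once. A small sketch that lists what would be merged, assuming the commonly documented search directories (the exact search set depends on the systemd version):

    from pathlib import Path

    # Directories systemd-sysext is documented to scan for *.raw images;
    # the kubernetes.raw symlink from the files stage lives in the first.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            for image in sorted(d.glob("*.raw")):
                print(image)  # e.g. /etc/extensions/kubernetes.raw
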
Jan 23 18:56:38.243846 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 18:56:38.253763 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 18:56:38.263845 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:38.264097 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 18:56:38.265151 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 18:56:38.271138 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 18:56:38.277880 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 18:56:38.283456 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 18:56:38.285871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 18:56:38.285992 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 23 18:56:38.286141 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 18:56:38.290814 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 23 18:56:38.295480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 18:56:38.295814 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 18:56:38.298788 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 18:56:38.298930 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 18:56:38.303122 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 18:56:38.303270 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 18:56:38.316539 systemd[1]: Finished ensure-sysext.service. Jan 23 18:56:38.320122 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 18:56:38.320366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 18:56:38.325536 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 18:56:38.325955 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 18:56:38.328921 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 18:56:38.392423 systemd-resolved[1619]: Positive Trust Anchors: Jan 23 18:56:38.392437 systemd-resolved[1619]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 18:56:38.392475 systemd-resolved[1619]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 18:56:38.395833 systemd-resolved[1619]: Using system hostname 'ci-4459.2.3-a-2aee31126f'. Jan 23 18:56:38.396929 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 18:56:38.399790 systemd[1]: Reached target network.target - Network. Jan 23 18:56:38.402798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 18:56:38.407076 augenrules[1652]: No rules Jan 23 18:56:38.407980 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:56:38.408185 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:56:38.440305 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 18:56:38.648814 systemd-networkd[1345]: eth0: Gained IPv6LL Jan 23 18:56:38.650799 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 18:56:38.653971 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 18:56:38.783419 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 18:56:38.786921 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 18:56:41.725025 ldconfig[1301]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 18:56:41.758224 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 18:56:41.761319 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 18:56:41.792172 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 18:56:41.793890 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 18:56:41.795413 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 18:56:41.798825 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 18:56:41.801741 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jan 23 18:56:41.803340 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 18:56:41.805770 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 18:56:41.808727 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 18:56:41.811717 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 18:56:41.811750 systemd[1]: Reached target paths.target - Path Units. Jan 23 18:56:41.813000 systemd[1]: Reached target timers.target - Timer Units. 
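
The positive trust anchor that systemd-resolved lists above is the standard IANA root-zone DS record for KSK-2017; its four fields are the key tag, the DNSSEC algorithm number, the digest type, and the digest itself. A short sketch making the field meanings explicit (values copied from the log; the expansions of 8 and 2 are their registered IANA meanings):

    # Root-zone trust anchor fields from the resolver output above.
    key_tag, alg, digest_type, digest = (
        "20326 8 2 "
        "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    ).split()
    print(f"key tag {key_tag}, algorithm {alg} (RSASHA256), "
          f"digest type {digest_type} (SHA-256), digest {digest[:12]}...")
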
Jan 23 18:56:41.830810 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 18:56:41.833347 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 18:56:41.838295 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 18:56:41.842863 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 18:56:41.845734 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 18:56:41.859125 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 18:56:41.861966 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 18:56:41.864212 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 18:56:41.868350 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 18:56:41.871726 systemd[1]: Reached target basic.target - Basic System. Jan 23 18:56:41.874765 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:56:41.874790 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 18:56:41.876692 systemd[1]: Starting chronyd.service - NTP client/server... Jan 23 18:56:41.880539 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 18:56:41.886803 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 18:56:41.890281 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 18:56:41.894900 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 18:56:41.901478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 18:56:41.904944 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 18:56:41.907949 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 18:56:41.912808 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jan 23 18:56:41.914957 systemd[1]: hv_fcopy_uio_daemon.service - Hyper-V FCOPY UIO daemon was skipped because of an unmet condition check (ConditionPathExists=/sys/bus/vmbus/devices/eb765408-105f-49b6-b4aa-c123b64d17d4/uio). Jan 23 18:56:41.917791 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 23 18:56:41.921824 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 23 18:56:41.924232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:56:41.932200 jq[1671]: false Jan 23 18:56:41.934382 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 18:56:41.939467 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 18:56:41.946632 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 18:56:41.953794 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 18:56:41.959562 KVP[1674]: KVP starting; pid is:1674 Jan 23 18:56:41.960108 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 18:56:41.972980 kernel: hv_utils: KVP IC version 4.0 Jan 23 18:56:41.972600 systemd[1]: Starting systemd-logind.service - User Login Management... 
Jan 23 18:56:41.970161 KVP[1674]: KVP LIC Version: 3.1 Jan 23 18:56:41.975854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 18:56:41.976274 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 18:56:41.977654 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Refreshing passwd entry cache Jan 23 18:56:41.979864 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 18:56:41.981291 oslogin_cache_refresh[1673]: Refreshing passwd entry cache Jan 23 18:56:41.986516 chronyd[1666]: chronyd version 4.7 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 23 18:56:41.988876 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 18:56:41.992601 extend-filesystems[1672]: Found /dev/nvme0n1p6 Jan 23 18:56:41.997256 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 18:56:42.002196 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 18:56:42.004954 chronyd[1666]: Timezone right/UTC failed leap second check, ignoring Jan 23 18:56:42.005936 chronyd[1666]: Loaded seccomp filter (level 2) Jan 23 18:56:42.008042 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 18:56:42.008169 systemd[1]: Started chronyd.service - NTP client/server. Jan 23 18:56:42.015078 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 18:56:42.016864 jq[1691]: true Jan 23 18:56:42.015265 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 18:56:42.017623 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Failure getting users, quitting Jan 23 18:56:42.017623 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:56:42.017623 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Refreshing group entry cache Jan 23 18:56:42.017218 oslogin_cache_refresh[1673]: Failure getting users, quitting Jan 23 18:56:42.017234 oslogin_cache_refresh[1673]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jan 23 18:56:42.017273 oslogin_cache_refresh[1673]: Refreshing group entry cache Jan 23 18:56:42.022889 extend-filesystems[1672]: Found /dev/nvme0n1p9 Jan 23 18:56:42.026813 extend-filesystems[1672]: Checking size of /dev/nvme0n1p9 Jan 23 18:56:42.035327 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Failure getting groups, quitting Jan 23 18:56:42.035327 google_oslogin_nss_cache[1673]: oslogin_cache_refresh[1673]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:56:42.034758 oslogin_cache_refresh[1673]: Failure getting groups, quitting Jan 23 18:56:42.034768 oslogin_cache_refresh[1673]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jan 23 18:56:42.038263 update_engine[1689]: I20260123 18:56:42.038023 1689 main.cc:92] Flatcar Update Engine starting Jan 23 18:56:42.039972 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jan 23 18:56:42.040163 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jan 23 18:56:42.055374 systemd[1]: motdgen.service: Deactivated successfully. 
Jan 23 18:56:42.055604 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 18:56:42.057644 extend-filesystems[1672]: Old size kept for /dev/nvme0n1p9 Jan 23 18:56:42.059250 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 18:56:42.059427 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 18:56:42.069365 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 18:56:42.074963 (ntainerd)[1705]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 18:56:42.077755 jq[1704]: true Jan 23 18:56:42.137653 tar[1702]: linux-amd64/LICENSE Jan 23 18:56:42.137869 tar[1702]: linux-amd64/helm Jan 23 18:56:42.438124 tar[1702]: linux-amd64/README.md Jan 23 18:56:42.454909 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 18:56:42.560938 sshd_keygen[1697]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 18:56:42.598394 dbus-daemon[1669]: [system] SELinux support is enabled Jan 23 18:56:42.601625 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 18:56:42.607725 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 18:56:42.616726 bash[1746]: Updated "/home/core/.ssh/authorized_keys" Jan 23 18:56:42.607763 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 18:56:42.610384 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 18:56:42.610402 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 18:56:42.614944 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 18:56:42.622922 update_engine[1689]: I20260123 18:56:42.621871 1689 update_check_scheduler.cc:74] Next update check in 4m50s Jan 23 18:56:42.622607 systemd[1]: Started update-engine.service - Update Engine. Jan 23 18:56:42.624841 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 23 18:56:42.627508 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 18:56:42.641399 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 18:56:42.643998 systemd-logind[1687]: New seat seat0. Jan 23 18:56:42.656445 systemd-logind[1687]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 23 18:56:42.657026 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 18:56:42.676668 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 18:56:42.681889 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
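extend-filesystems reporting "Old size kept for /dev/nvme0n1p9" means the root filesystem already spans its partition, so no grow was performed. A manual equivalent of that check, assuming the ext4 root noted elsewhere in this log (the service's actual implementation may differ):

    lsblk -b -o NAME,SIZE /dev/nvme0n1    # compare partition size against the disk
    sudo resize2fs /dev/nvme0n1p9         # online ext4 grow; a no-op when already full-size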
Jan 23 18:56:42.698542 coreos-metadata[1668]: Jan 23 18:56:42.698 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 23 18:56:42.704186 coreos-metadata[1668]: Jan 23 18:56:42.704 INFO Fetch successful Jan 23 18:56:42.704186 coreos-metadata[1668]: Jan 23 18:56:42.704 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 23 18:56:42.707742 coreos-metadata[1668]: Jan 23 18:56:42.707 INFO Fetch successful Jan 23 18:56:42.707742 coreos-metadata[1668]: Jan 23 18:56:42.707 INFO Fetching http://168.63.129.16/machine/73adb904-8d89-40b8-88d5-73b63b1f4a0e/56dbf323%2Daaca%2D4ade%2Dba1f%2D900566d98488.%5Fci%2D4459.2.3%2Da%2D2aee31126f?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 23 18:56:42.708888 coreos-metadata[1668]: Jan 23 18:56:42.708 INFO Fetch successful Jan 23 18:56:42.708986 coreos-metadata[1668]: Jan 23 18:56:42.708 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 23 18:56:42.717832 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 18:56:42.718340 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 18:56:42.721549 coreos-metadata[1668]: Jan 23 18:56:42.720 INFO Fetch successful Jan 23 18:56:42.722912 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 18:56:42.744986 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 23 18:56:42.762142 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 18:56:42.772128 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 18:56:42.780922 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 18:56:42.784381 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 18:56:42.788349 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 18:56:42.792021 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 18:56:42.869790 locksmithd[1776]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 18:56:43.566689 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
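The metadata agent above talks to two Azure endpoints: the WireServer at 168.63.129.16 (versions, goal state, shared config) and IMDS at 169.254.169.254. IMDS requires the Metadata header or it rejects the request; the command below reproduces the vmSize fetch from the log verbatim:

    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"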
Jan 23 18:56:43.577964 (kubelet)[1815]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:56:43.864192 containerd[1705]: time="2026-01-23T18:56:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 18:56:43.865044 containerd[1705]: time="2026-01-23T18:56:43.865012233Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 18:56:43.875096 containerd[1705]: time="2026-01-23T18:56:43.875044117Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.213µs" Jan 23 18:56:43.875096 containerd[1705]: time="2026-01-23T18:56:43.875073815Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 18:56:43.875096 containerd[1705]: time="2026-01-23T18:56:43.875099148Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875237963Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875258684Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875281403Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875337083Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875348109Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875568409Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875581746Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875593673Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875602970Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875660159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876589 containerd[1705]: time="2026-01-23T18:56:43.875842811Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876891 containerd[1705]: time="2026-01-23T18:56:43.875864266Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 18:56:43.876891 containerd[1705]: time="2026-01-23T18:56:43.875873465Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 18:56:43.876891 containerd[1705]: time="2026-01-23T18:56:43.875895218Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 18:56:43.876891 containerd[1705]: time="2026-01-23T18:56:43.876182512Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 18:56:43.876891 containerd[1705]: time="2026-01-23T18:56:43.876219196Z" level=info msg="metadata content store policy set" policy=shared Jan 23 18:56:43.893305 containerd[1705]: time="2026-01-23T18:56:43.893272404Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 18:56:43.893431 containerd[1705]: time="2026-01-23T18:56:43.893417129Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 18:56:43.893530 containerd[1705]: time="2026-01-23T18:56:43.893520089Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 18:56:43.893574 containerd[1705]: time="2026-01-23T18:56:43.893565620Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 18:56:43.893624 containerd[1705]: time="2026-01-23T18:56:43.893616769Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 18:56:43.893663 containerd[1705]: time="2026-01-23T18:56:43.893656061Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 18:56:43.893787 containerd[1705]: time="2026-01-23T18:56:43.893699712Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 18:56:43.893787 containerd[1705]: time="2026-01-23T18:56:43.893720416Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 18:56:43.893787 containerd[1705]: time="2026-01-23T18:56:43.893735491Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 18:56:43.893787 containerd[1705]: time="2026-01-23T18:56:43.893746830Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 18:56:43.893900 containerd[1705]: time="2026-01-23T18:56:43.893755689Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 18:56:43.893956 containerd[1705]: time="2026-01-23T18:56:43.893941019Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 18:56:43.894098 containerd[1705]: time="2026-01-23T18:56:43.894088415Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 18:56:43.894144 containerd[1705]: time="2026-01-23T18:56:43.894136464Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 18:56:43.894184 containerd[1705]: time="2026-01-23T18:56:43.894177094Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 18:56:43.894220 containerd[1705]: time="2026-01-23T18:56:43.894212341Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 18:56:43.894255 containerd[1705]: time="2026-01-23T18:56:43.894247960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 18:56:43.894290 containerd[1705]: time="2026-01-23T18:56:43.894282424Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 18:56:43.894341 containerd[1705]: time="2026-01-23T18:56:43.894332761Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 18:56:43.894398 containerd[1705]: time="2026-01-23T18:56:43.894376732Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 18:56:43.894436 containerd[1705]: time="2026-01-23T18:56:43.894428455Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 18:56:43.894500 containerd[1705]: time="2026-01-23T18:56:43.894470045Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 18:56:43.894500 containerd[1705]: time="2026-01-23T18:56:43.894481941Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 18:56:43.894593 containerd[1705]: time="2026-01-23T18:56:43.894582635Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 18:56:43.894643 containerd[1705]: time="2026-01-23T18:56:43.894636456Z" level=info msg="Start snapshots syncer" Jan 23 18:56:43.894727 containerd[1705]: time="2026-01-23T18:56:43.894717450Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 18:56:43.895174 containerd[1705]: time="2026-01-23T18:56:43.895088715Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 18:56:43.895174 containerd[1705]: time="2026-01-23T18:56:43.895138121Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 18:56:43.895376 containerd[1705]: time="2026-01-23T18:56:43.895307688Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 18:56:43.895516 containerd[1705]: time="2026-01-23T18:56:43.895501956Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 18:56:43.895643 containerd[1705]: time="2026-01-23T18:56:43.895579098Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 18:56:43.895643 containerd[1705]: time="2026-01-23T18:56:43.895595188Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 18:56:43.895643 containerd[1705]: time="2026-01-23T18:56:43.895605465Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 18:56:43.895643 containerd[1705]: time="2026-01-23T18:56:43.895616972Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 18:56:43.895643 containerd[1705]: time="2026-01-23T18:56:43.895627964Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 18:56:43.895855 containerd[1705]: time="2026-01-23T18:56:43.895784034Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 18:56:43.895855 containerd[1705]: time="2026-01-23T18:56:43.895812963Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 18:56:43.895855 containerd[1705]: 
time="2026-01-23T18:56:43.895829979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 18:56:43.895855 containerd[1705]: time="2026-01-23T18:56:43.895841271Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 18:56:43.895987 containerd[1705]: time="2026-01-23T18:56:43.895978415Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:56:43.896107 containerd[1705]: time="2026-01-23T18:56:43.896062706Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 18:56:43.896107 containerd[1705]: time="2026-01-23T18:56:43.896074751Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:56:43.896107 containerd[1705]: time="2026-01-23T18:56:43.896084542Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 18:56:43.896107 containerd[1705]: time="2026-01-23T18:56:43.896092779Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896191131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896205921Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896217238Z" level=info msg="runtime interface created" Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896220922Z" level=info msg="created NRI interface" Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896229705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 18:56:43.896273 containerd[1705]: time="2026-01-23T18:56:43.896239968Z" level=info msg="Connect containerd service" Jan 23 18:56:43.896388 containerd[1705]: time="2026-01-23T18:56:43.896380761Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 18:56:43.897244 containerd[1705]: time="2026-01-23T18:56:43.897221591Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 18:56:44.076931 kubelet[1815]: E0123 18:56:44.076891 1815 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:56:44.078661 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:56:44.078815 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:56:44.079176 systemd[1]: kubelet.service: Consumed 876ms CPU time, 258.2M memory peak. 
Jan 23 18:56:44.365870 waagent[1795]: 2026-01-23T18:56:44.365798Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 Jan 23 18:56:44.367529 waagent[1795]: 2026-01-23T18:56:44.367485Z INFO Daemon Daemon OS: flatcar 4459.2.3 Jan 23 18:56:44.368897 waagent[1795]: 2026-01-23T18:56:44.368861Z INFO Daemon Daemon Python: 3.11.13 Jan 23 18:56:44.370152 waagent[1795]: 2026-01-23T18:56:44.370085Z INFO Daemon Daemon Run daemon Jan 23 18:56:44.372819 waagent[1795]: 2026-01-23T18:56:44.372786Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4459.2.3' Jan 23 18:56:44.376798 waagent[1795]: 2026-01-23T18:56:44.376748Z INFO Daemon Daemon Using waagent for provisioning Jan 23 18:56:44.378244 waagent[1795]: 2026-01-23T18:56:44.378208Z INFO Daemon Daemon Activate resource disk Jan 23 18:56:44.379652 waagent[1795]: 2026-01-23T18:56:44.379607Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 23 18:56:44.424570 waagent[1795]: 2026-01-23T18:56:44.424520Z INFO Daemon Daemon Found device: None Jan 23 18:56:44.427793 waagent[1795]: 2026-01-23T18:56:44.427757Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 23 18:56:44.431766 waagent[1795]: 2026-01-23T18:56:44.431729Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 23 18:56:44.435259 waagent[1795]: 2026-01-23T18:56:44.435220Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:56:44.438799 waagent[1795]: 2026-01-23T18:56:44.438761Z INFO Daemon Daemon Running default provisioning handler Jan 23 18:56:44.448142 containerd[1705]: time="2026-01-23T18:56:44.448082545Z" level=info msg="Start subscribing containerd event" Jan 23 18:56:44.448270 containerd[1705]: time="2026-01-23T18:56:44.448240547Z" level=info msg="Start recovering state" Jan 23 18:56:44.448427 containerd[1705]: time="2026-01-23T18:56:44.448379022Z" level=info msg="Start event monitor" Jan 23 18:56:44.448427 containerd[1705]: time="2026-01-23T18:56:44.448294112Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 18:56:44.448479 containerd[1705]: time="2026-01-23T18:56:44.448463167Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 18:56:44.448479 containerd[1705]: time="2026-01-23T18:56:44.448392868Z" level=info msg="Start cni network conf syncer for default" Jan 23 18:56:44.448522 containerd[1705]: time="2026-01-23T18:56:44.448480769Z" level=info msg="Start streaming server" Jan 23 18:56:44.448522 containerd[1705]: time="2026-01-23T18:56:44.448489667Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 18:56:44.448522 containerd[1705]: time="2026-01-23T18:56:44.448497848Z" level=info msg="runtime interface starting up..." Jan 23 18:56:44.448522 containerd[1705]: time="2026-01-23T18:56:44.448503822Z" level=info msg="starting plugins..." Jan 23 18:56:44.448522 containerd[1705]: time="2026-01-23T18:56:44.448517760Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 18:56:44.448995 containerd[1705]: time="2026-01-23T18:56:44.448670362Z" level=info msg="containerd successfully booted in 0.585820s" Jan 23 18:56:44.449220 systemd[1]: Started containerd.service - containerd container runtime. 
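The "no network config found in /etc/cni/net.d" error logged at 18:56:43.897 is the same bootstrap gap: the CRI plugin stays in the "cni plugin not initialized" state until a network plugin installs a config there. Real clusters deploy a CNI provider, but the minimal shape of a config that would satisfy the loader looks like this (the bridge plugin, file name, and subnet are illustrative only):

    cat <<'EOF' | sudo tee /etc/cni/net.d/10-example.conflist
    {
      "cniVersion": "1.0.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        }
      ]
    }
    EOF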
Jan 23 18:56:44.450204 waagent[1795]: 2026-01-23T18:56:44.450144Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 23 18:56:44.455299 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 18:56:44.462768 waagent[1795]: 2026-01-23T18:56:44.462733Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 23 18:56:44.462977 systemd[1]: Startup finished in 3.267s (kernel) + 10.690s (initrd) + 11.600s (userspace) = 25.558s. Jan 23 18:56:44.466512 waagent[1795]: 2026-01-23T18:56:44.466467Z INFO Daemon Daemon cloud-init is enabled: False Jan 23 18:56:44.467300 waagent[1795]: 2026-01-23T18:56:44.466610Z INFO Daemon Daemon Copying ovf-env.xml Jan 23 18:56:44.530281 waagent[1795]: 2026-01-23T18:56:44.530239Z INFO Daemon Daemon Successfully mounted dvd Jan 23 18:56:44.556297 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 23 18:56:44.557968 waagent[1795]: 2026-01-23T18:56:44.557930Z INFO Daemon Daemon Detect protocol endpoint Jan 23 18:56:44.561900 waagent[1795]: 2026-01-23T18:56:44.558085Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 23 18:56:44.561900 waagent[1795]: 2026-01-23T18:56:44.558365Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 23 18:56:44.561900 waagent[1795]: 2026-01-23T18:56:44.558670Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 23 18:56:44.561900 waagent[1795]: 2026-01-23T18:56:44.559063Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 23 18:56:44.561900 waagent[1795]: 2026-01-23T18:56:44.559286Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 23 18:56:44.570025 waagent[1795]: 2026-01-23T18:56:44.569977Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 23 18:56:44.571227 waagent[1795]: 2026-01-23T18:56:44.570564Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 23 18:56:44.571227 waagent[1795]: 2026-01-23T18:56:44.570914Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 23 18:56:44.682568 waagent[1795]: 2026-01-23T18:56:44.682481Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 23 18:56:44.684572 waagent[1795]: 2026-01-23T18:56:44.682651Z INFO Daemon Daemon Forcing an update of the goal state. Jan 23 18:56:44.686285 waagent[1795]: 2026-01-23T18:56:44.686244Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:56:44.702403 login[1800]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 23 18:56:44.703021 login[1799]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 18:56:44.708147 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.709248Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.181 Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.709776Z INFO Daemon Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.710022Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 37ba8815-9f94-4418-a6bf-cc49ea92f283 eTag: 11756627959336148365 source: Fabric] Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.710581Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.711172Z INFO Daemon Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.711365Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:56:44.724532 waagent[1795]: 2026-01-23T18:56:44.715105Z INFO Daemon Daemon Downloading artifacts profile blob Jan 23 18:56:44.712141 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 18:56:44.724382 systemd-logind[1687]: New session 1 of user core. Jan 23 18:56:44.754747 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 18:56:44.757000 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 18:56:44.781249 waagent[1795]: 2026-01-23T18:56:44.781208Z INFO Daemon Downloaded certificate {'thumbprint': 'E32422E1D946CD7B21690E1E8F94F6973549A1CF', 'hasPrivateKey': True} Jan 23 18:56:44.781622 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 18:56:44.785388 waagent[1795]: 2026-01-23T18:56:44.784864Z INFO Daemon Fetch goal state completed Jan 23 18:56:44.785080 systemd-logind[1687]: New session c1 of user core. Jan 23 18:56:44.815146 waagent[1795]: 2026-01-23T18:56:44.815097Z INFO Daemon Daemon Starting provisioning Jan 23 18:56:44.815375 waagent[1795]: 2026-01-23T18:56:44.815348Z INFO Daemon Daemon Handle ovf-env.xml. Jan 23 18:56:44.817234 waagent[1795]: 2026-01-23T18:56:44.817148Z INFO Daemon Daemon Set hostname [ci-4459.2.3-a-2aee31126f] Jan 23 18:56:44.819347 waagent[1795]: 2026-01-23T18:56:44.819312Z INFO Daemon Daemon Publish hostname [ci-4459.2.3-a-2aee31126f] Jan 23 18:56:44.823667 waagent[1795]: 2026-01-23T18:56:44.820484Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 23 18:56:44.823667 waagent[1795]: 2026-01-23T18:56:44.820884Z INFO Daemon Daemon Primary interface is [eth0] Jan 23 18:56:44.827847 systemd-networkd[1345]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 18:56:44.827854 systemd-networkd[1345]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 18:56:44.827875 systemd-networkd[1345]: eth0: DHCP lease lost Jan 23 18:56:44.828778 waagent[1795]: 2026-01-23T18:56:44.828669Z INFO Daemon Daemon Create user account if not exists Jan 23 18:56:44.831419 waagent[1795]: 2026-01-23T18:56:44.828928Z INFO Daemon Daemon User core already exists, skip useradd Jan 23 18:56:44.831419 waagent[1795]: 2026-01-23T18:56:44.829202Z INFO Daemon Daemon Configure sudoer Jan 23 18:56:44.838516 waagent[1795]: 2026-01-23T18:56:44.838470Z INFO Daemon Daemon Configure sshd Jan 23 18:56:44.844635 waagent[1795]: 2026-01-23T18:56:44.844593Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 23 18:56:44.851262 waagent[1795]: 2026-01-23T18:56:44.844764Z INFO Daemon Daemon Deploy ssh public key. Jan 23 18:56:44.851366 systemd-networkd[1345]: eth0: DHCPv4 address 10.200.4.15/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 23 18:56:44.939157 systemd[1852]: Queued start job for default target default.target. Jan 23 18:56:44.948388 systemd[1852]: Created slice app.slice - User Application Slice. Jan 23 18:56:44.948418 systemd[1852]: Reached target paths.target - Paths. Jan 23 18:56:44.948448 systemd[1852]: Reached target timers.target - Timers. 
Jan 23 18:56:44.949301 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 18:56:44.957147 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 18:56:44.957289 systemd[1852]: Reached target sockets.target - Sockets. Jan 23 18:56:44.957371 systemd[1852]: Reached target basic.target - Basic System. Jan 23 18:56:44.957461 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 18:56:44.957535 systemd[1852]: Reached target default.target - Main User Target. Jan 23 18:56:44.957558 systemd[1852]: Startup finished in 167ms. Jan 23 18:56:44.958702 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 18:56:45.704053 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 23 18:56:45.707904 systemd-logind[1687]: New session 2 of user core. Jan 23 18:56:45.714794 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 18:56:45.940338 waagent[1795]: 2026-01-23T18:56:45.940288Z INFO Daemon Daemon Provisioning complete Jan 23 18:56:45.949436 waagent[1795]: 2026-01-23T18:56:45.949403Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 23 18:56:45.949992 waagent[1795]: 2026-01-23T18:56:45.949962Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 23 18:56:45.950101 waagent[1795]: 2026-01-23T18:56:45.950079Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent Jan 23 18:56:46.058107 waagent[1892]: 2026-01-23T18:56:46.057992Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) Jan 23 18:56:46.058107 waagent[1892]: 2026-01-23T18:56:46.058090Z INFO ExtHandler ExtHandler OS: flatcar 4459.2.3 Jan 23 18:56:46.058383 waagent[1892]: 2026-01-23T18:56:46.058130Z INFO ExtHandler ExtHandler Python: 3.11.13 Jan 23 18:56:46.058383 waagent[1892]: 2026-01-23T18:56:46.058170Z INFO ExtHandler ExtHandler CPU Arch: x86_64 Jan 23 18:56:46.083347 waagent[1892]: 2026-01-23T18:56:46.083298Z INFO ExtHandler ExtHandler Distro: flatcar-4459.2.3; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.13; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.22.0; Jan 23 18:56:46.083478 waagent[1892]: 2026-01-23T18:56:46.083452Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:56:46.083522 waagent[1892]: 2026-01-23T18:56:46.083507Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:56:46.089162 waagent[1892]: 2026-01-23T18:56:46.089114Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 23 18:56:46.092689 waagent[1892]: 2026-01-23T18:56:46.092655Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.181 Jan 23 18:56:46.093050 waagent[1892]: 2026-01-23T18:56:46.093017Z INFO ExtHandler Jan 23 18:56:46.093112 waagent[1892]: 2026-01-23T18:56:46.093071Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e52a6309-8fb0-4346-8a28-39f3557c0e2d eTag: 11756627959336148365 source: Fabric] Jan 23 18:56:46.093298 waagent[1892]: 2026-01-23T18:56:46.093271Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 23 18:56:46.093625 waagent[1892]: 2026-01-23T18:56:46.093597Z INFO ExtHandler Jan 23 18:56:46.093667 waagent[1892]: 2026-01-23T18:56:46.093637Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 23 18:56:46.096008 waagent[1892]: 2026-01-23T18:56:46.095981Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:56:46.164741 waagent[1892]: 2026-01-23T18:56:46.164670Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E32422E1D946CD7B21690E1E8F94F6973549A1CF', 'hasPrivateKey': True} Jan 23 18:56:46.165065 waagent[1892]: 2026-01-23T18:56:46.165038Z INFO ExtHandler Fetch goal state completed Jan 23 18:56:46.180121 waagent[1892]: 2026-01-23T18:56:46.180075Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.4.4-dev (Library: OpenSSL 3.4.4-dev ) Jan 23 18:56:46.184137 waagent[1892]: 2026-01-23T18:56:46.184092Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1892 Jan 23 18:56:46.184240 waagent[1892]: 2026-01-23T18:56:46.184215Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 23 18:56:46.184468 waagent[1892]: 2026-01-23T18:56:46.184445Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** Jan 23 18:56:46.185513 waagent[1892]: 2026-01-23T18:56:46.185478Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] Jan 23 18:56:46.185864 waagent[1892]: 2026-01-23T18:56:46.185830Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4459.2.3', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported Jan 23 18:56:46.185973 waagent[1892]: 2026-01-23T18:56:46.185945Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False Jan 23 18:56:46.186371 waagent[1892]: 2026-01-23T18:56:46.186341Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 23 18:56:46.219052 waagent[1892]: 2026-01-23T18:56:46.219027Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 23 18:56:46.219182 waagent[1892]: 2026-01-23T18:56:46.219160Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 23 18:56:46.224696 waagent[1892]: 2026-01-23T18:56:46.224383Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 23 18:56:46.229613 systemd[1]: Reload requested from client PID 1907 ('systemctl') (unit waagent.service)... Jan 23 18:56:46.229625 systemd[1]: Reloading... Jan 23 18:56:46.295695 zram_generator::config[1946]: No configuration found. Jan 23 18:56:46.486311 systemd[1]: Reloading finished in 256 ms. Jan 23 18:56:46.503701 waagent[1892]: 2026-01-23T18:56:46.501262Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 23 18:56:46.503701 waagent[1892]: 2026-01-23T18:56:46.501402Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 23 18:56:46.590707 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#14 cmd 0x4a status: scsi 0x0 srb 0x20 hv 0xc0000001 Jan 23 18:56:47.019537 waagent[1892]: 2026-01-23T18:56:47.019471Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. 
Jan 23 18:56:47.019838 waagent[1892]: 2026-01-23T18:56:47.019809Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] Jan 23 18:56:47.020501 waagent[1892]: 2026-01-23T18:56:47.020413Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 23 18:56:47.020564 waagent[1892]: 2026-01-23T18:56:47.020525Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:56:47.020611 waagent[1892]: 2026-01-23T18:56:47.020590Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:56:47.020838 waagent[1892]: 2026-01-23T18:56:47.020816Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 23 18:56:47.021237 waagent[1892]: 2026-01-23T18:56:47.021162Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 23 18:56:47.021297 waagent[1892]: 2026-01-23T18:56:47.021270Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 23 18:56:47.021297 waagent[1892]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 23 18:56:47.021297 waagent[1892]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 23 18:56:47.021297 waagent[1892]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 23 18:56:47.021297 waagent[1892]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:56:47.021297 waagent[1892]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:56:47.021297 waagent[1892]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 23 18:56:47.021730 waagent[1892]: 2026-01-23T18:56:47.021691Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 23 18:56:47.021898 waagent[1892]: 2026-01-23T18:56:47.021871Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 23 18:56:47.021961 waagent[1892]: 2026-01-23T18:56:47.021938Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 23 18:56:47.022100 waagent[1892]: 2026-01-23T18:56:47.022058Z INFO EnvHandler ExtHandler Configure routes Jan 23 18:56:47.022201 waagent[1892]: 2026-01-23T18:56:47.022179Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 23 18:56:47.022378 waagent[1892]: 2026-01-23T18:56:47.022341Z INFO EnvHandler ExtHandler Gateway:None Jan 23 18:56:47.022422 waagent[1892]: 2026-01-23T18:56:47.022403Z INFO EnvHandler ExtHandler Routes:None Jan 23 18:56:47.022646 waagent[1892]: 2026-01-23T18:56:47.022590Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 23 18:56:47.022752 waagent[1892]: 2026-01-23T18:56:47.022654Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
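The MonitorHandler routing-table dump above is raw /proc/net/route, where addresses are little-endian hex: 0104C80A byte-reverses to 10.200.4.1 (the default gateway) and 0004C80A to the 10.200.4.0/24 on-link route, matching the DHCP lease logged earlier. The readable equivalent, plus a one-liner decode of a single entry:

    ip route show dev eth0      # the same table in dotted-quad form
    h=0104C80A; printf '%d.%d.%d.%d\n' 0x${h:6:2} 0x${h:4:2} 0x${h:2:2} 0x${h:0:2}   # -> 10.200.4.1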
Jan 23 18:56:47.023181 waagent[1892]: 2026-01-23T18:56:47.023156Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 23 18:56:47.030498 waagent[1892]: 2026-01-23T18:56:47.030464Z INFO ExtHandler ExtHandler Jan 23 18:56:47.030573 waagent[1892]: 2026-01-23T18:56:47.030521Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: db63fef9-8de9-4bfe-8c03-93245a248e8e correlation 2f9d89d1-0aa1-4b77-bbe3-dd84e6e317ed created: 2026-01-23T18:55:51.709259Z] Jan 23 18:56:47.030839 waagent[1892]: 2026-01-23T18:56:47.030813Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 23 18:56:47.031220 waagent[1892]: 2026-01-23T18:56:47.031197Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 0 ms] Jan 23 18:56:47.070294 waagent[1892]: 2026-01-23T18:56:47.069863Z WARNING ExtHandler ExtHandler Failed to get firewall packets: 'iptables -w -t security -L OUTPUT --zero OUTPUT -nxv' failed: 2 (iptables v1.8.11 (nf_tables): Illegal option `--numeric' with this command Jan 23 18:56:47.070294 waagent[1892]: Try `iptables -h' or 'iptables --help' for more information.) Jan 23 18:56:47.070294 waagent[1892]: 2026-01-23T18:56:47.070222Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9A88DE2E-7BD8-43E1-BD01-4DE982809125;DroppedPackets: -1;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] Jan 23 18:56:47.088264 waagent[1892]: 2026-01-23T18:56:47.088219Z INFO MonitorHandler ExtHandler Network interfaces: Jan 23 18:56:47.088264 waagent[1892]: Executing ['ip', '-a', '-o', 'link']: Jan 23 18:56:47.088264 waagent[1892]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 23 18:56:47.088264 waagent[1892]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d6:ad:3e brd ff:ff:ff:ff:ff:ff\ alias Network Device Jan 23 18:56:47.088264 waagent[1892]: 3: enP30832s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d6:ad:3e brd ff:ff:ff:ff:ff:ff\ altname enP30832p0s0 Jan 23 18:56:47.088264 waagent[1892]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 23 18:56:47.088264 waagent[1892]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 23 18:56:47.088264 waagent[1892]: 2: eth0 inet 10.200.4.15/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 23 18:56:47.088264 waagent[1892]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 23 18:56:47.088264 waagent[1892]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 23 18:56:47.088264 waagent[1892]: 2: eth0 inet6 fe80::20d:3aff:fed6:ad3e/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 23 18:56:47.118949 waagent[1892]: 2026-01-23T18:56:47.118913Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: Jan 23 18:56:47.118949 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:56:47.118949 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.118949 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:56:47.118949 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.118949 
waagent[1892]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 23 18:56:47.118949 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.118949 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:56:47.118949 waagent[1892]: 6 898 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:56:47.118949 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:56:47.121984 waagent[1892]: 2026-01-23T18:56:47.121938Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 23 18:56:47.121984 waagent[1892]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:56:47.121984 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.121984 waagent[1892]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 23 18:56:47.121984 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.121984 waagent[1892]: Chain OUTPUT (policy ACCEPT 3 packets, 164 bytes) Jan 23 18:56:47.121984 waagent[1892]: pkts bytes target prot opt in out source destination Jan 23 18:56:47.121984 waagent[1892]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 23 18:56:47.121984 waagent[1892]: 13 1410 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 23 18:56:47.121984 waagent[1892]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 23 18:56:54.329628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 18:56:54.331058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:56:54.814800 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:56:54.820902 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:56:54.853740 kubelet[2044]: E0123 18:56:54.853700 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:56:54.856629 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:56:54.856779 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:56:54.857116 systemd[1]: kubelet.service: Consumed 131ms CPU time, 110.3M memory peak. Jan 23 18:57:04.972725 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 18:57:04.974205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:05.481770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
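The waagent WARNING at 18:56:47.069 a few entries back is an iptables-nft quirk: the single call combines listing (-L) with counter zeroing (--zero), and the nf_tables backend refuses --numeric in that combination, exactly as the error text says. Splitting the operation is the usual workaround (a sketch of the equivalent, not necessarily how waagent itself recovers):

    sudo iptables -w -t security -Z OUTPUT        # zero the chain counters first
    sudo iptables -w -t security -L OUTPUT -nxv   # then list numerically in a separate call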
Jan 23 18:57:05.487883 (kubelet)[2059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:05.521044 kubelet[2059]: E0123 18:57:05.521011 2059 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:05.522608 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:05.522764 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:05.523089 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.6M memory peak. Jan 23 18:57:05.790287 chronyd[1666]: Selected source PHC0 Jan 23 18:57:07.830233 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 18:57:07.831528 systemd[1]: Started sshd@0-10.200.4.15:22-10.200.16.10:35882.service - OpenSSH per-connection server daemon (10.200.16.10:35882). Jan 23 18:57:08.567554 sshd[2067]: Accepted publickey for core from 10.200.16.10 port 35882 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:08.567971 sshd-session[2067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:08.572575 systemd-logind[1687]: New session 3 of user core. Jan 23 18:57:08.578823 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 18:57:09.108573 systemd[1]: Started sshd@1-10.200.4.15:22-10.200.16.10:35888.service - OpenSSH per-connection server daemon (10.200.16.10:35888). Jan 23 18:57:09.727630 sshd[2073]: Accepted publickey for core from 10.200.16.10 port 35888 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:09.728752 sshd-session[2073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:09.733051 systemd-logind[1687]: New session 4 of user core. Jan 23 18:57:09.741808 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 18:57:10.162048 sshd[2076]: Connection closed by 10.200.16.10 port 35888 Jan 23 18:57:10.162775 sshd-session[2073]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:10.166450 systemd-logind[1687]: Session 4 logged out. Waiting for processes to exit. Jan 23 18:57:10.166751 systemd[1]: sshd@1-10.200.4.15:22-10.200.16.10:35888.service: Deactivated successfully. Jan 23 18:57:10.168418 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 18:57:10.169785 systemd-logind[1687]: Removed session 4. Jan 23 18:57:10.271164 systemd[1]: Started sshd@2-10.200.4.15:22-10.200.16.10:38542.service - OpenSSH per-connection server daemon (10.200.16.10:38542). Jan 23 18:57:10.891629 sshd[2082]: Accepted publickey for core from 10.200.16.10 port 38542 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:10.892775 sshd-session[2082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:10.897124 systemd-logind[1687]: New session 5 of user core. Jan 23 18:57:10.903827 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 18:57:11.320665 sshd[2085]: Connection closed by 10.200.16.10 port 38542 Jan 23 18:57:11.321904 sshd-session[2082]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:11.325183 systemd-logind[1687]: Session 5 logged out. Waiting for processes to exit. 
Jan 23 18:57:11.325352 systemd[1]: sshd@2-10.200.4.15:22-10.200.16.10:38542.service: Deactivated successfully. Jan 23 18:57:11.327028 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 18:57:11.328389 systemd-logind[1687]: Removed session 5. Jan 23 18:57:11.441280 systemd[1]: Started sshd@3-10.200.4.15:22-10.200.16.10:38546.service - OpenSSH per-connection server daemon (10.200.16.10:38546). Jan 23 18:57:12.071507 sshd[2091]: Accepted publickey for core from 10.200.16.10 port 38546 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:12.071926 sshd-session[2091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:12.076128 systemd-logind[1687]: New session 6 of user core. Jan 23 18:57:12.085807 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 18:57:12.507582 sshd[2094]: Connection closed by 10.200.16.10 port 38546 Jan 23 18:57:12.508852 sshd-session[2091]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:12.511415 systemd[1]: sshd@3-10.200.4.15:22-10.200.16.10:38546.service: Deactivated successfully. Jan 23 18:57:12.513149 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 18:57:12.514548 systemd-logind[1687]: Session 6 logged out. Waiting for processes to exit. Jan 23 18:57:12.515893 systemd-logind[1687]: Removed session 6. Jan 23 18:57:12.615153 systemd[1]: Started sshd@4-10.200.4.15:22-10.200.16.10:38560.service - OpenSSH per-connection server daemon (10.200.16.10:38560). Jan 23 18:57:13.235243 sshd[2100]: Accepted publickey for core from 10.200.16.10 port 38560 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:13.236209 sshd-session[2100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:13.240813 systemd-logind[1687]: New session 7 of user core. Jan 23 18:57:13.249823 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 18:57:13.878659 sudo[2104]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 18:57:13.878906 sudo[2104]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:13.910432 sudo[2104]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:14.008898 sshd[2103]: Connection closed by 10.200.16.10 port 38560 Jan 23 18:57:14.009604 sshd-session[2100]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:14.012754 systemd[1]: sshd@4-10.200.4.15:22-10.200.16.10:38560.service: Deactivated successfully. Jan 23 18:57:14.014521 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 18:57:14.016262 systemd-logind[1687]: Session 7 logged out. Waiting for processes to exit. Jan 23 18:57:14.017226 systemd-logind[1687]: Removed session 7. Jan 23 18:57:14.119443 systemd[1]: Started sshd@5-10.200.4.15:22-10.200.16.10:38570.service - OpenSSH per-connection server daemon (10.200.16.10:38570). Jan 23 18:57:14.737866 sshd[2110]: Accepted publickey for core from 10.200.16.10 port 38570 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:14.739009 sshd-session[2110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:14.743609 systemd-logind[1687]: New session 8 of user core. Jan 23 18:57:14.745846 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 18:57:15.075467 sudo[2115]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 18:57:15.075730 sudo[2115]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:15.082432 sudo[2115]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:15.086396 sudo[2114]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 18:57:15.086620 sudo[2114]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:15.094243 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 18:57:15.123267 augenrules[2137]: No rules Jan 23 18:57:15.124333 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 18:57:15.124537 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 18:57:15.125372 sudo[2114]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:15.223752 sshd[2113]: Connection closed by 10.200.16.10 port 38570 Jan 23 18:57:15.224840 sshd-session[2110]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:15.227215 systemd[1]: sshd@5-10.200.4.15:22-10.200.16.10:38570.service: Deactivated successfully. Jan 23 18:57:15.228721 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 18:57:15.230203 systemd-logind[1687]: Session 8 logged out. Waiting for processes to exit. Jan 23 18:57:15.230978 systemd-logind[1687]: Removed session 8. Jan 23 18:57:15.331148 systemd[1]: Started sshd@6-10.200.4.15:22-10.200.16.10:38586.service - OpenSSH per-connection server daemon (10.200.16.10:38586). Jan 23 18:57:15.722506 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 18:57:15.724283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:15.959129 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 38586 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:57:15.960258 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:57:15.964752 systemd-logind[1687]: New session 9 of user core. Jan 23 18:57:15.974806 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 18:57:16.228730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:16.231808 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 18:57:16.263633 kubelet[2158]: E0123 18:57:16.263578 2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 18:57:16.265069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 18:57:16.265192 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 18:57:16.265466 systemd[1]: kubelet.service: Consumed 125ms CPU time, 110.4M memory peak. Jan 23 18:57:16.296009 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 18:57:16.296241 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 18:57:17.996182 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 23 18:57:18.013002 (dockerd)[2183]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 23 18:57:19.538512 dockerd[2183]: time="2026-01-23T18:57:19.537938917Z" level=info msg="Starting up"
Jan 23 18:57:19.539415 dockerd[2183]: time="2026-01-23T18:57:19.539386102Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jan 23 18:57:19.548314 dockerd[2183]: time="2026-01-23T18:57:19.548281143Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Jan 23 18:57:19.576258 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1666333669-merged.mount: Deactivated successfully.
Jan 23 18:57:19.694836 dockerd[2183]: time="2026-01-23T18:57:19.694809014Z" level=info msg="Loading containers: start."
Jan 23 18:57:19.722710 kernel: Initializing XFRM netlink socket
Jan 23 18:57:20.031954 systemd-networkd[1345]: docker0: Link UP
Jan 23 18:57:20.046665 dockerd[2183]: time="2026-01-23T18:57:20.046626647Z" level=info msg="Loading containers: done."
Jan 23 18:57:20.076026 dockerd[2183]: time="2026-01-23T18:57:20.075991234Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 23 18:57:20.076161 dockerd[2183]: time="2026-01-23T18:57:20.076065585Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Jan 23 18:57:20.076161 dockerd[2183]: time="2026-01-23T18:57:20.076141235Z" level=info msg="Initializing buildkit"
Jan 23 18:57:20.125439 dockerd[2183]: time="2026-01-23T18:57:20.125392452Z" level=info msg="Completed buildkit initialization"
Jan 23 18:57:20.131605 dockerd[2183]: time="2026-01-23T18:57:20.131571979Z" level=info msg="Daemon has completed initialization"
Jan 23 18:57:20.131798 dockerd[2183]: time="2026-01-23T18:57:20.131715345Z" level=info msg="API listen on /run/docker.sock"
Jan 23 18:57:20.131787 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 23 18:57:21.116149 containerd[1705]: time="2026-01-23T18:57:21.116102542Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\""
Jan 23 18:57:21.988868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78426743.mount: Deactivated successfully.
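The overlay2 warning above is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd disables its faster native-diff path for image builds. A sketch for confirming the storage driver and the kernel option (reading /proc/config.gz assumes the kernel was built with CONFIG_IKCONFIG_PROC, which may not hold on every image):

  docker info --format '{{.Driver}}'                     # expected: overlay2
  zcat /proc/config.gz | grep OVERLAY_FS_REDIRECT_DIR    # only if the kernel exposes its config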
Jan 23 18:57:23.116463 containerd[1705]: time="2026-01-23T18:57:23.116414942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:23.118985 containerd[1705]: time="2026-01-23T18:57:23.118953271Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081"
Jan 23 18:57:23.122618 containerd[1705]: time="2026-01-23T18:57:23.122575858Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:23.130034 containerd[1705]: time="2026-01-23T18:57:23.129744888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:23.130623 containerd[1705]: time="2026-01-23T18:57:23.130400600Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.014261406s"
Jan 23 18:57:23.130623 containerd[1705]: time="2026-01-23T18:57:23.130434822Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\""
Jan 23 18:57:23.131039 containerd[1705]: time="2026-01-23T18:57:23.131021656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 18:57:24.431824 containerd[1705]: time="2026-01-23T18:57:24.431776489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:24.434390 containerd[1705]: time="2026-01-23T18:57:24.434263955Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448"
Jan 23 18:57:24.437195 containerd[1705]: time="2026-01-23T18:57:24.437164813Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:24.443596 containerd[1705]: time="2026-01-23T18:57:24.443562708Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:24.444395 containerd[1705]: time="2026-01-23T18:57:24.444368285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.313259301s"
Jan 23 18:57:24.444481 containerd[1705]: time="2026-01-23T18:57:24.444467433Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\""
Jan 23 18:57:24.444963 containerd[1705]: time="2026-01-23T18:57:24.444937562Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 18:57:25.351706 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Jan 23 18:57:25.544903 containerd[1705]: time="2026-01-23T18:57:25.544855757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:25.547942 containerd[1705]: time="2026-01-23T18:57:25.547811086Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935"
Jan 23 18:57:25.550720 containerd[1705]: time="2026-01-23T18:57:25.550697966Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:25.555243 containerd[1705]: time="2026-01-23T18:57:25.555212111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:25.556370 containerd[1705]: time="2026-01-23T18:57:25.555933126Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.11096536s"
Jan 23 18:57:25.556370 containerd[1705]: time="2026-01-23T18:57:25.555963251Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\""
Jan 23 18:57:25.556577 containerd[1705]: time="2026-01-23T18:57:25.556550269Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 18:57:26.473178 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Jan 23 18:57:26.475951 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:57:26.497007 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1803818083.mount: Deactivated successfully.
Jan 23 18:57:27.026827 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:57:27.031981 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 18:57:27.068156 kubelet[2467]: E0123 18:57:27.068123 2467 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 18:57:27.071252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 18:57:27.071471 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 18:57:27.072103 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak.
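Note the pattern above: the unit fails, and the Restart=/RestartSec= settings in kubelet.service schedule another attempt ("restart counter is at 4"). A sketch for inspecting such a restart loop with standard systemd tooling (the NRestarts property assumes a reasonably recent systemd):

  systemctl show kubelet -p NRestarts -p Result -p ExecMainStatus
  journalctl -u kubelet -n 20 --no-pager    # last few attempts and their errors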
Jan 23 18:57:27.319217 containerd[1705]: time="2026-01-23T18:57:27.319105883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:27.322466 containerd[1705]: time="2026-01-23T18:57:27.322341473Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301"
Jan 23 18:57:27.326400 containerd[1705]: time="2026-01-23T18:57:27.326373366Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:27.330355 containerd[1705]: time="2026-01-23T18:57:27.330327022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:27.330741 containerd[1705]: time="2026-01-23T18:57:27.330716697Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.774134348s"
Jan 23 18:57:27.330818 containerd[1705]: time="2026-01-23T18:57:27.330805411Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\""
Jan 23 18:57:27.331359 containerd[1705]: time="2026-01-23T18:57:27.331332494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 18:57:27.875814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1053266253.mount: Deactivated successfully.
Jan 23 18:57:27.912936 update_engine[1689]: I20260123 18:57:27.912889 1689 update_attempter.cc:509] Updating boot flags...
Jan 23 18:57:28.981095 containerd[1705]: time="2026-01-23T18:57:28.981043388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:28.983697 containerd[1705]: time="2026-01-23T18:57:28.983578643Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015"
Jan 23 18:57:28.986873 containerd[1705]: time="2026-01-23T18:57:28.986834076Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:28.991287 containerd[1705]: time="2026-01-23T18:57:28.991245203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:28.992143 containerd[1705]: time="2026-01-23T18:57:28.992007484Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 1.660645616s"
Jan 23 18:57:28.992143 containerd[1705]: time="2026-01-23T18:57:28.992039619Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Jan 23 18:57:28.992567 containerd[1705]: time="2026-01-23T18:57:28.992537796Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 18:57:29.537895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408707823.mount: Deactivated successfully.
Jan 23 18:57:29.557882 containerd[1705]: time="2026-01-23T18:57:29.557842208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:29.562916 containerd[1705]: time="2026-01-23T18:57:29.562802539Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226"
Jan 23 18:57:29.567369 containerd[1705]: time="2026-01-23T18:57:29.567345036Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:29.572785 containerd[1705]: time="2026-01-23T18:57:29.572152368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:29.572785 containerd[1705]: time="2026-01-23T18:57:29.572625869Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 580.061257ms"
Jan 23 18:57:29.572785 containerd[1705]: time="2026-01-23T18:57:29.572699779Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Jan 23 18:57:29.573386 containerd[1705]: time="2026-01-23T18:57:29.573369280Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 18:57:30.160402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788246323.mount: Deactivated successfully.
Jan 23 18:57:32.661946 containerd[1705]: time="2026-01-23T18:57:32.661896313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:32.666128 containerd[1705]: time="2026-01-23T18:57:32.665908546Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822"
Jan 23 18:57:32.669688 containerd[1705]: time="2026-01-23T18:57:32.669653530Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:32.676669 containerd[1705]: time="2026-01-23T18:57:32.676638701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:57:32.677410 containerd[1705]: time="2026-01-23T18:57:32.677383663Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.103931916s"
Jan 23 18:57:32.677464 containerd[1705]: time="2026-01-23T18:57:32.677418548Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\""
Jan 23 18:57:35.223706 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
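The pulls logged above cover the usual control-plane image set: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, and etcd. The same set can be pre-fetched before bootstrap; a sketch using standard tooling (versions taken from the log; the registry default is registry.k8s.io):

  kubeadm config images pull --kubernetes-version v1.34.3
  crictl images | grep registry.k8s.io    # verify what containerd now has cached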
Jan 23 18:57:35.223871 systemd[1]: kubelet.service: Consumed 125ms CPU time, 108.5M memory peak.
Jan 23 18:57:35.226218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:57:35.253508 systemd[1]: Reload requested from client PID 2639 ('systemctl') (unit session-9.scope)...
Jan 23 18:57:35.253523 systemd[1]: Reloading...
Jan 23 18:57:35.359703 zram_generator::config[2689]: No configuration found.
Jan 23 18:57:35.581337 systemd[1]: Reloading finished in 327 ms.
Jan 23 18:57:35.612384 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 18:57:35.612461 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 18:57:35.612787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:57:35.612842 systemd[1]: kubelet.service: Consumed 80ms CPU time, 81.8M memory peak.
Jan 23 18:57:35.614218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 18:57:36.308840 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 18:57:36.312984 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 18:57:36.350695 kubelet[2753]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 18:57:36.350695 kubelet[2753]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 18:57:36.350695 kubelet[2753]: I0123 18:57:36.350509 2753 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 18:57:36.707268 kubelet[2753]: I0123 18:57:36.707165 2753 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 18:57:36.707268 kubelet[2753]: I0123 18:57:36.707190 2753 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 18:57:36.710777 kubelet[2753]: I0123 18:57:36.710759 2753 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 18:57:36.710835 kubelet[2753]: I0123 18:57:36.710780 2753 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 18:57:36.711020 kubelet[2753]: I0123 18:57:36.711006 2753 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 18:57:36.723708 kubelet[2753]: E0123 18:57:36.723599 2753 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 18:57:36.725618 kubelet[2753]: I0123 18:57:36.725580 2753 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 18:57:36.729425 kubelet[2753]: I0123 18:57:36.729401 2753 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 18:57:36.731591 kubelet[2753]: I0123 18:57:36.731573 2753 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 18:57:36.732379 kubelet[2753]: I0123 18:57:36.732350 2753 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 18:57:36.732535 kubelet[2753]: I0123 18:57:36.732379 2753 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-a-2aee31126f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 18:57:36.732649 kubelet[2753]: I0123 18:57:36.732540 2753 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 18:57:36.732649 kubelet[2753]: I0123 18:57:36.732549 2753 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 18:57:36.732649 kubelet[2753]: I0123 18:57:36.732628 2753 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 18:57:36.739161 kubelet[2753]: I0123 18:57:36.739138 2753 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:57:36.739603 kubelet[2753]: I0123 18:57:36.739287 2753 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 18:57:36.739603 kubelet[2753]: I0123 18:57:36.739299 2753 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 18:57:36.739603 kubelet[2753]: I0123 18:57:36.739319 2753 kubelet.go:387] "Adding apiserver pod source"
Jan 23 18:57:36.739603 kubelet[2753]: I0123 18:57:36.739339 2753 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 18:57:36.744484 kubelet[2753]: E0123 18:57:36.744450 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-a-2aee31126f&limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 18:57:36.744803 kubelet[2753]: E0123 18:57:36.744718 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 18:57:36.745130 kubelet[2753]: I0123 18:57:36.745110 2753 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 18:57:36.745500 kubelet[2753]: I0123 18:57:36.745484 2753 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 18:57:36.745536 kubelet[2753]: I0123 18:57:36.745514 2753 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 18:57:36.745559 kubelet[2753]: W0123 18:57:36.745554 2753 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 18:57:36.749065 kubelet[2753]: I0123 18:57:36.748764 2753 server.go:1262] "Started kubelet"
Jan 23 18:57:36.750120 kubelet[2753]: I0123 18:57:36.749507 2753 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 18:57:36.754911 kubelet[2753]: E0123 18:57:36.753479 2753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-a-2aee31126f.188d712aae0705b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-a-2aee31126f,UID:ci-4459.2.3-a-2aee31126f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-a-2aee31126f,},FirstTimestamp:2026-01-23 18:57:36.748733875 +0000 UTC m=+0.432475311,LastTimestamp:2026-01-23 18:57:36.748733875 +0000 UTC m=+0.432475311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-a-2aee31126f,}"
Jan 23 18:57:36.755886 kubelet[2753]: I0123 18:57:36.755372 2753 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 18:57:36.757116 kubelet[2753]: I0123 18:57:36.757094 2753 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 18:57:36.758081 kubelet[2753]: I0123 18:57:36.758059 2753 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 18:57:36.758278 kubelet[2753]: E0123 18:57:36.758260 2753 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.3-a-2aee31126f\" not found"
Jan 23 18:57:36.760553 kubelet[2753]: I0123 18:57:36.760525 2753 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 18:57:36.760621 kubelet[2753]: I0123 18:57:36.760568 2753 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 18:57:36.762178 kubelet[2753]: I0123 18:57:36.762148 2753 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 18:57:36.762307 kubelet[2753]: I0123 18:57:36.762295 2753 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 18:57:36.762487 kubelet[2753]: I0123 18:57:36.762477 2753 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 18:57:36.762789 kubelet[2753]: I0123 18:57:36.762777 2753 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 18:57:36.764834 kubelet[2753]: E0123 18:57:36.764810 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-2aee31126f?timeout=10s\": dial tcp 10.200.4.15:6443: connect: connection refused" interval="200ms"
Jan 23 18:57:36.765379 kubelet[2753]: I0123 18:57:36.765360 2753 factory.go:223] Registration of the systemd container factory successfully
Jan 23 18:57:36.767694 kubelet[2753]: I0123 18:57:36.765710 2753 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 18:57:36.767694 kubelet[2753]: E0123 18:57:36.766484 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 18:57:36.770527 kubelet[2753]: E0123 18:57:36.770513 2753 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 23 18:57:36.770609 kubelet[2753]: I0123 18:57:36.770563 2753 factory.go:223] Registration of the containerd container factory successfully
Jan 23 18:57:36.783284 kubelet[2753]: I0123 18:57:36.783269 2753 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 18:57:36.783361 kubelet[2753]: I0123 18:57:36.783354 2753 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 18:57:36.783403 kubelet[2753]: I0123 18:57:36.783398 2753 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 18:57:36.791822 kubelet[2753]: I0123 18:57:36.791762 2753 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 18:57:36.792808 kubelet[2753]: I0123 18:57:36.792787 2753 policy_none.go:49] "None policy: Start"
Jan 23 18:57:36.792868 kubelet[2753]: I0123 18:57:36.792817 2753 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 18:57:36.792868 kubelet[2753]: I0123 18:57:36.792827 2753 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 18:57:36.794278 kubelet[2753]: I0123 18:57:36.794260 2753 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 18:57:36.794371 kubelet[2753]: I0123 18:57:36.794364 2753 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 18:57:36.794523 kubelet[2753]: I0123 18:57:36.794516 2753 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 18:57:36.796553 kubelet[2753]: E0123 18:57:36.795727 2753 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 18:57:36.796807 kubelet[2753]: E0123 18:57:36.796789 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 18:57:36.858973 kubelet[2753]: E0123 18:57:36.858951 2753 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4459.2.3-a-2aee31126f\" not found"
Jan 23 18:57:36.879102 kubelet[2753]: I0123 18:57:36.879087 2753 policy_none.go:47] "Start"
Jan 23 18:57:36.882966 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 18:57:36.895937 kubelet[2753]: E0123 18:57:36.895918 2753 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 23 18:57:36.896659 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 18:57:36.899462 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 18:57:36.909334 kubelet[2753]: E0123 18:57:36.909217 2753 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 18:57:36.909623 kubelet[2753]: I0123 18:57:36.909614 2753 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 18:57:36.909719 kubelet[2753]: I0123 18:57:36.909695 2753 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 18:57:36.910145 kubelet[2753]: I0123 18:57:36.909893 2753 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 18:57:36.911404 kubelet[2753]: E0123 18:57:36.911384 2753 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 18:57:36.911466 kubelet[2753]: E0123 18:57:36.911431 2753 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459.2.3-a-2aee31126f\" not found"
Jan 23 18:57:36.966173 kubelet[2753]: E0123 18:57:36.966095 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-2aee31126f?timeout=10s\": dial tcp 10.200.4.15:6443: connect: connection refused" interval="400ms"
Jan 23 18:57:37.011423 kubelet[2753]: I0123 18:57:37.011403 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.011745 kubelet[2753]: E0123 18:57:37.011705 2753 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.15:6443/api/v1/nodes\": dial tcp 10.200.4.15:6443: connect: connection refused" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.147714 systemd[1]: Created slice kubepods-burstable-podf8510ce093d9b98b8beb5144e3a7747c.slice - libcontainer container kubepods-burstable-podf8510ce093d9b98b8beb5144e3a7747c.slice.
Jan 23 18:57:37.158163 kubelet[2753]: E0123 18:57:37.158130 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162356 kubelet[2753]: I0123 18:57:37.162339 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162419 kubelet[2753]: I0123 18:57:37.162366 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162419 kubelet[2753]: I0123 18:57:37.162383 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162419 kubelet[2753]: I0123 18:57:37.162406 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162495 kubelet[2753]: I0123 18:57:37.162425 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162495 kubelet[2753]: I0123 18:57:37.162443 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162495 kubelet[2753]: I0123 18:57:37.162460 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.162495 kubelet[2753]: I0123 18:57:37.162477 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.190050 systemd[1]: Created slice kubepods-burstable-pod54b81dd7db0e4aea49286baff36c851a.slice - libcontainer container kubepods-burstable-pod54b81dd7db0e4aea49286baff36c851a.slice.
Jan 23 18:57:37.191755 kubelet[2753]: E0123 18:57:37.191731 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.213866 kubelet[2753]: I0123 18:57:37.213853 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.214173 kubelet[2753]: E0123 18:57:37.214149 2753 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.15:6443/api/v1/nodes\": dial tcp 10.200.4.15:6443: connect: connection refused" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.242781 systemd[1]: Created slice kubepods-burstable-pod539f19da22611d3a0cbc71c394d73d31.slice - libcontainer container kubepods-burstable-pod539f19da22611d3a0cbc71c394d73d31.slice.
Jan 23 18:57:37.244777 kubelet[2753]: E0123 18:57:37.244752 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.263510 kubelet[2753]: I0123 18:57:37.263386 2753 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/539f19da22611d3a0cbc71c394d73d31-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-a-2aee31126f\" (UID: \"539f19da22611d3a0cbc71c394d73d31\") " pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.366767 kubelet[2753]: E0123 18:57:37.366735 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-2aee31126f?timeout=10s\": dial tcp 10.200.4.15:6443: connect: connection refused" interval="800ms"
Jan 23 18:57:37.485376 containerd[1705]: time="2026-01-23T18:57:37.485326647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-a-2aee31126f,Uid:f8510ce093d9b98b8beb5144e3a7747c,Namespace:kube-system,Attempt:0,}"
Jan 23 18:57:37.499096 containerd[1705]: time="2026-01-23T18:57:37.498569559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-a-2aee31126f,Uid:54b81dd7db0e4aea49286baff36c851a,Namespace:kube-system,Attempt:0,}"
Jan 23 18:57:37.616294 kubelet[2753]: I0123 18:57:37.615986 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.616294 kubelet[2753]: E0123 18:57:37.616273 2753 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.15:6443/api/v1/nodes\": dial tcp 10.200.4.15:6443: connect: connection refused" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:37.625553 kubelet[2753]: E0123 18:57:37.625530 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 18:57:37.784710 containerd[1705]: time="2026-01-23T18:57:37.784599878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-a-2aee31126f,Uid:539f19da22611d3a0cbc71c394d73d31,Namespace:kube-system,Attempt:0,}"
Jan 23 18:57:37.808023 kubelet[2753]: E0123 18:57:37.807905 2753 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459.2.3-a-2aee31126f.188d712aae0705b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459.2.3-a-2aee31126f,UID:ci-4459.2.3-a-2aee31126f,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459.2.3-a-2aee31126f,},FirstTimestamp:2026-01-23 18:57:36.748733875 +0000 UTC m=+0.432475311,LastTimestamp:2026-01-23 18:57:36.748733875 +0000 UTC m=+0.432475311,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459.2.3-a-2aee31126f,}"
Jan 23 18:57:37.914880 kubelet[2753]: E0123 18:57:37.914849 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 18:57:38.167246 kubelet[2753]: E0123 18:57:38.167148 2753 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459.2.3-a-2aee31126f?timeout=10s\": dial tcp 10.200.4.15:6443: connect: connection refused" interval="1.6s"
Jan 23 18:57:38.233795 kubelet[2753]: E0123 18:57:38.233764 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 18:57:38.259026 kubelet[2753]: E0123 18:57:38.258998 2753 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459.2.3-a-2aee31126f&limit=500&resourceVersion=0\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 18:57:38.264162 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742622884.mount: Deactivated successfully.
Jan 23 18:57:38.287354 containerd[1705]: time="2026-01-23T18:57:38.287318215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:57:38.301561 containerd[1705]: time="2026-01-23T18:57:38.301374589Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Jan 23 18:57:38.304766 containerd[1705]: time="2026-01-23T18:57:38.304738667Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:57:38.313653 containerd[1705]: time="2026-01-23T18:57:38.313525153Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:57:38.316632 containerd[1705]: time="2026-01-23T18:57:38.316525399Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 23 18:57:38.321045 containerd[1705]: time="2026-01-23T18:57:38.321017165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:57:38.330589 containerd[1705]: time="2026-01-23T18:57:38.330559733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 23 18:57:38.331082 containerd[1705]: time="2026-01-23T18:57:38.331058676Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 841.792879ms"
Jan 23 18:57:38.333625 containerd[1705]: time="2026-01-23T18:57:38.333488351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Jan 23 18:57:38.336907 containerd[1705]: time="2026-01-23T18:57:38.336878017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 834.757643ms"
Jan 23 18:57:38.356115 containerd[1705]: time="2026-01-23T18:57:38.356074462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 555.025608ms"
Jan 23 18:57:38.401313 containerd[1705]: time="2026-01-23T18:57:38.401275899Z" level=info msg="connecting to shim 7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298" address="unix:///run/containerd/s/f408ae59793b20d59c2a547b0788b4a2d321f096cc9bfcc5120ed830f9700bea" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:57:38.405063 containerd[1705]: time="2026-01-23T18:57:38.404990905Z" level=info msg="connecting to shim 080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7" address="unix:///run/containerd/s/c9f843d9f5e7ecd02556f46c875d41dc80669cb12306a369c68c303e2e2a5a9e" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:57:38.418720 kubelet[2753]: I0123 18:57:38.418562 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:38.419649 kubelet[2753]: E0123 18:57:38.419266 2753 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.15:6443/api/v1/nodes\": dial tcp 10.200.4.15:6443: connect: connection refused" node="ci-4459.2.3-a-2aee31126f"
Jan 23 18:57:38.431838 systemd[1]: Started cri-containerd-7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298.scope - libcontainer container 7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298.
Jan 23 18:57:38.435594 systemd[1]: Started cri-containerd-080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7.scope - libcontainer container 080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7.
Jan 23 18:57:38.443315 containerd[1705]: time="2026-01-23T18:57:38.443286289Z" level=info msg="connecting to shim 66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78" address="unix:///run/containerd/s/1cd4cb7e89c33e02c773f54e7003cfd41bceeecce90c910af316a32b58657288" namespace=k8s.io protocol=ttrpc version=3
Jan 23 18:57:38.471807 systemd[1]: Started cri-containerd-66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78.scope - libcontainer container 66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78.
Jan 23 18:57:38.506768 containerd[1705]: time="2026-01-23T18:57:38.506735672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459.2.3-a-2aee31126f,Uid:f8510ce093d9b98b8beb5144e3a7747c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298\""
Jan 23 18:57:38.523421 containerd[1705]: time="2026-01-23T18:57:38.523169542Z" level=info msg="CreateContainer within sandbox \"7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 23 18:57:38.525292 containerd[1705]: time="2026-01-23T18:57:38.525225398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459.2.3-a-2aee31126f,Uid:54b81dd7db0e4aea49286baff36c851a,Namespace:kube-system,Attempt:0,} returns sandbox id \"080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7\""
Jan 23 18:57:38.532300 containerd[1705]: time="2026-01-23T18:57:38.532273496Z" level=info msg="CreateContainer within sandbox \"080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 23 18:57:38.553247 containerd[1705]: time="2026-01-23T18:57:38.553215548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459.2.3-a-2aee31126f,Uid:539f19da22611d3a0cbc71c394d73d31,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78\""
Jan 23 18:57:38.560917 containerd[1705]: time="2026-01-23T18:57:38.560888371Z" level=info msg="CreateContainer within sandbox \"66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 23 18:57:38.563364 containerd[1705]: time="2026-01-23T18:57:38.563341201Z" level=info msg="Container 5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:38.573281 containerd[1705]: time="2026-01-23T18:57:38.573249447Z" level=info msg="Container d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:38.628624 containerd[1705]: time="2026-01-23T18:57:38.628599825Z" level=info msg="CreateContainer within sandbox \"7e8a5c9be7c52c2a57d05d948ac3e35b8473b8159b0c7441f6390ba4cc195298\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65\""
Jan 23 18:57:38.629119 containerd[1705]: time="2026-01-23T18:57:38.629082999Z" level=info msg="StartContainer for \"5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65\""
Jan 23 18:57:38.630317 containerd[1705]: time="2026-01-23T18:57:38.630258599Z" level=info msg="connecting to shim 5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65" address="unix:///run/containerd/s/f408ae59793b20d59c2a547b0788b4a2d321f096cc9bfcc5120ed830f9700bea" protocol=ttrpc version=3
Jan 23 18:57:38.637988 containerd[1705]: time="2026-01-23T18:57:38.637910124Z" level=info msg="Container 18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:57:38.647973 containerd[1705]: time="2026-01-23T18:57:38.647945210Z" level=info msg="CreateContainer within sandbox \"080074e4c0cca88b8ce24fbf5a90b0bec3d8f7db45acf16e7f0117be6873d9a7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65\""
Jan 23 18:57:38.648447 containerd[1705]: time="2026-01-23T18:57:38.648423187Z" level=info msg="StartContainer for \"d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65\""
Jan 23 18:57:38.649227 containerd[1705]: time="2026-01-23T18:57:38.649193196Z" level=info msg="connecting to shim d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65" address="unix:///run/containerd/s/c9f843d9f5e7ecd02556f46c875d41dc80669cb12306a369c68c303e2e2a5a9e" protocol=ttrpc version=3
Jan 23 18:57:38.652040 systemd[1]: Started cri-containerd-5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65.scope - libcontainer container 5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65.
Jan 23 18:57:38.658751 containerd[1705]: time="2026-01-23T18:57:38.658667513Z" level=info msg="CreateContainer within sandbox \"66ddbf1606b40eaff4289ab7c0846829e34f9f6271cfd2e1d16349c2b1ee4e78\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30\""
Jan 23 18:57:38.659201 containerd[1705]: time="2026-01-23T18:57:38.659162469Z" level=info msg="StartContainer for \"18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30\""
Jan 23 18:57:38.661758 containerd[1705]: time="2026-01-23T18:57:38.661733496Z" level=info msg="connecting to shim 18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30" address="unix:///run/containerd/s/1cd4cb7e89c33e02c773f54e7003cfd41bceeecce90c910af316a32b58657288" protocol=ttrpc version=3
Jan 23 18:57:38.672937 systemd[1]: Started cri-containerd-d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65.scope - libcontainer container d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65.
Jan 23 18:57:38.683834 systemd[1]: Started cri-containerd-18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30.scope - libcontainer container 18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30.
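Each CRI container is wrapped in its own transient systemd scope, which is what the "Started cri-containerd-<id>.scope" lines record; the kubepods*.slice units created earlier hold them in the cgroup hierarchy. A sketch for listing them:

  systemctl list-units --type=scope 'cri-containerd-*'
  systemctl status kubepods.slice    # parent slice for all pod cgroups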
Jan 23 18:57:38.737939 containerd[1705]: time="2026-01-23T18:57:38.737915559Z" level=info msg="StartContainer for \"5217e7b0d9424f58ec2529e2188d0594dd35fa62f1cbc429057714faa64dab65\" returns successfully" Jan 23 18:57:38.748379 containerd[1705]: time="2026-01-23T18:57:38.748099911Z" level=info msg="StartContainer for \"d9f7b803190216f1ca6bcfbf3e6688d467d00b29a8c5d0a6426c72a15b166d65\" returns successfully" Jan 23 18:57:38.768960 kubelet[2753]: E0123 18:57:38.768924 2753 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 18:57:38.782135 containerd[1705]: time="2026-01-23T18:57:38.782114477Z" level=info msg="StartContainer for \"18a1292b41e1a3b673dfcb484da7de0a997f249808c7a1719a2d24bc7602ed30\" returns successfully" Jan 23 18:57:38.808232 kubelet[2753]: E0123 18:57:38.807925 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:38.812654 kubelet[2753]: E0123 18:57:38.812637 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:38.815091 kubelet[2753]: E0123 18:57:38.814957 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:39.816783 kubelet[2753]: E0123 18:57:39.816756 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:39.817118 kubelet[2753]: E0123 18:57:39.816952 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:40.022205 kubelet[2753]: I0123 18:57:40.022179 2753 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:40.931398 kubelet[2753]: E0123 18:57:40.931210 2753 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.143947 kubelet[2753]: E0123 18:57:41.143914 2753 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459.2.3-a-2aee31126f\" not found" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.166753 kubelet[2753]: I0123 18:57:41.166567 2753 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.259412 kubelet[2753]: I0123 18:57:41.259384 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.264129 kubelet[2753]: E0123 18:57:41.264101 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.264234 
kubelet[2753]: I0123 18:57:41.264139 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.266163 kubelet[2753]: E0123 18:57:41.265587 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.266163 kubelet[2753]: I0123 18:57:41.266097 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.268099 kubelet[2753]: E0123 18:57:41.268074 2753 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-a-2aee31126f\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:41.748016 kubelet[2753]: I0123 18:57:41.747984 2753 apiserver.go:52] "Watching apiserver" Jan 23 18:57:41.761069 kubelet[2753]: I0123 18:57:41.761045 2753 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:57:42.295154 kubelet[2753]: I0123 18:57:42.295126 2753 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:42.303082 kubelet[2753]: I0123 18:57:42.303044 2753 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:43.347299 systemd[1]: Reload requested from client PID 3036 ('systemctl') (unit session-9.scope)... Jan 23 18:57:43.347312 systemd[1]: Reloading... Jan 23 18:57:43.426721 zram_generator::config[3079]: No configuration found. Jan 23 18:57:43.642839 systemd[1]: Reloading finished in 295 ms. Jan 23 18:57:43.665908 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:43.682969 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 18:57:43.683179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:43.683219 systemd[1]: kubelet.service: Consumed 743ms CPU time, 125.1M memory peak. Jan 23 18:57:43.684983 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 18:57:44.915366 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 18:57:44.923940 (kubelet)[3150]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 18:57:44.960693 kubelet[3150]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 18:57:44.960693 kubelet[3150]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
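[editor's note: the systemd entries above bracket a kubelet restart: "Stopped kubelet.service" at 18:57:43.683179 and "Started kubelet.service" at 18:57:44.915366. A quick check of the restart gap from those journal prefixes (year taken from the timestamps inside the log entries themselves):]

```python
from datetime import datetime

# Journal prefixes copied from the entries above.
stopped = datetime.fromisoformat("2026-01-23 18:57:43.683179")
started = datetime.fromisoformat("2026-01-23 18:57:44.915366")
print(f"kubelet was down for {(started - stopped).total_seconds():.3f}s")  # ~1.232s
```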
Jan 23 18:57:44.960693 kubelet[3150]: I0123 18:57:44.960060 3150 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 18:57:44.964521 kubelet[3150]: I0123 18:57:44.964499 3150 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 18:57:44.964521 kubelet[3150]: I0123 18:57:44.964515 3150 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 18:57:44.964621 kubelet[3150]: I0123 18:57:44.964537 3150 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 18:57:44.964621 kubelet[3150]: I0123 18:57:44.964546 3150 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 18:57:44.964782 kubelet[3150]: I0123 18:57:44.964769 3150 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 18:57:44.965692 kubelet[3150]: I0123 18:57:44.965648 3150 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 18:57:44.969705 kubelet[3150]: I0123 18:57:44.968829 3150 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 18:57:44.972630 kubelet[3150]: I0123 18:57:44.972619 3150 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 18:57:44.974426 kubelet[3150]: I0123 18:57:44.974405 3150 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 23 18:57:44.974666 kubelet[3150]: I0123 18:57:44.974650 3150 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 18:57:44.974862 kubelet[3150]: I0123 18:57:44.974721 3150 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459.2.3-a-2aee31126f","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 18:57:44.974969 kubelet[3150]: I0123 18:57:44.974862 3150 topology_manager.go:138] "Creating topology 
manager with none policy" Jan 23 18:57:44.974969 kubelet[3150]: I0123 18:57:44.974871 3150 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 18:57:44.974969 kubelet[3150]: I0123 18:57:44.974893 3150 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 18:57:44.975626 kubelet[3150]: I0123 18:57:44.975613 3150 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:44.976709 kubelet[3150]: I0123 18:57:44.975775 3150 kubelet.go:475] "Attempting to sync node with API server" Jan 23 18:57:44.976709 kubelet[3150]: I0123 18:57:44.975786 3150 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 18:57:44.976709 kubelet[3150]: I0123 18:57:44.975805 3150 kubelet.go:387] "Adding apiserver pod source" Jan 23 18:57:44.976709 kubelet[3150]: I0123 18:57:44.975817 3150 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 18:57:44.982159 kubelet[3150]: I0123 18:57:44.982142 3150 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 18:57:44.982725 kubelet[3150]: I0123 18:57:44.982711 3150 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 18:57:44.982803 kubelet[3150]: I0123 18:57:44.982797 3150 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 18:57:44.984842 kubelet[3150]: I0123 18:57:44.984832 3150 server.go:1262] "Started kubelet" Jan 23 18:57:44.986388 kubelet[3150]: I0123 18:57:44.986373 3150 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 18:57:44.999728 kubelet[3150]: E0123 18:57:44.999673 3150 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 18:57:45.000712 kubelet[3150]: I0123 18:57:44.999825 3150 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 18:57:45.000712 kubelet[3150]: I0123 18:57:45.000610 3150 server.go:310] "Adding debug handlers to kubelet server" Jan 23 18:57:45.001116 kubelet[3150]: I0123 18:57:45.001093 3150 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 18:57:45.002032 kubelet[3150]: I0123 18:57:45.002013 3150 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 18:57:45.002237 kubelet[3150]: I0123 18:57:45.002227 3150 reconciler.go:29] "Reconciler: start to sync state" Jan 23 18:57:45.010294 kubelet[3150]: I0123 18:57:45.010264 3150 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 18:57:45.010363 kubelet[3150]: I0123 18:57:45.010306 3150 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 18:57:45.010446 kubelet[3150]: I0123 18:57:45.010433 3150 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 18:57:45.017706 kubelet[3150]: I0123 18:57:45.017218 3150 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 18:57:45.017706 kubelet[3150]: I0123 18:57:45.017362 3150 factory.go:223] Registration of the systemd container factory successfully Jan 23 18:57:45.017706 kubelet[3150]: I0123 18:57:45.017429 3150 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 18:57:45.017706 kubelet[3150]: I0123 18:57:45.017639 3150 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 18:57:45.018933 kubelet[3150]: I0123 18:57:45.018917 3150 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 23 18:57:45.019007 kubelet[3150]: I0123 18:57:45.019000 3150 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 18:57:45.019054 kubelet[3150]: I0123 18:57:45.019049 3150 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 18:57:45.019131 kubelet[3150]: E0123 18:57:45.019119 3150 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 18:57:45.028136 kubelet[3150]: I0123 18:57:45.028124 3150 factory.go:223] Registration of the containerd container factory successfully Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079093 3150 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079118 3150 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079134 3150 state_mem.go:36] "Initialized new in-memory state store" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079249 3150 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079270 3150 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079285 3150 policy_none.go:49] "None policy: Start" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079294 3150 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079302 3150 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079475 3150 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 18:57:45.080642 kubelet[3150]: I0123 18:57:45.079484 3150 policy_none.go:47] "Start" Jan 23 18:57:45.084627 kubelet[3150]: E0123 18:57:45.084580 3150 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 18:57:45.085358 kubelet[3150]: I0123 18:57:45.084968 3150 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 18:57:45.085358 kubelet[3150]: I0123 18:57:45.084983 3150 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 18:57:45.085358 kubelet[3150]: I0123 18:57:45.085325 3150 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 18:57:45.092168 kubelet[3150]: E0123 18:57:45.092078 3150 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 18:57:45.120052 kubelet[3150]: I0123 18:57:45.119853 3150 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.121363 kubelet[3150]: I0123 18:57:45.120527 3150 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.121363 kubelet[3150]: I0123 18:57:45.120882 3150 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.139565 kubelet[3150]: I0123 18:57:45.139442 3150 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:45.139565 kubelet[3150]: I0123 18:57:45.139446 3150 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:45.139565 kubelet[3150]: E0123 18:57:45.139521 3150 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.140087 kubelet[3150]: I0123 18:57:45.140072 3150 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:45.196855 kubelet[3150]: I0123 18:57:45.196795 3150 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.204253 kubelet[3150]: I0123 18:57:45.204021 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-k8s-certs\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.204372 kubelet[3150]: I0123 18:57:45.204052 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-ca-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.204591 kubelet[3150]: I0123 18:57:45.204559 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-flexvolume-dir\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.205151 kubelet[3150]: I0123 18:57:45.205112 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-k8s-certs\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.206640 kubelet[3150]: I0123 18:57:45.205207 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.206640 kubelet[3150]: I0123 18:57:45.205253 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/539f19da22611d3a0cbc71c394d73d31-kubeconfig\") pod \"kube-scheduler-ci-4459.2.3-a-2aee31126f\" (UID: \"539f19da22611d3a0cbc71c394d73d31\") " pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.206747 kubelet[3150]: I0123 18:57:45.206632 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-ca-certs\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.206796 kubelet[3150]: I0123 18:57:45.206769 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8510ce093d9b98b8beb5144e3a7747c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" (UID: \"f8510ce093d9b98b8beb5144e3a7747c\") " pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.206823 kubelet[3150]: I0123 18:57:45.206803 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/54b81dd7db0e4aea49286baff36c851a-kubeconfig\") pod \"kube-controller-manager-ci-4459.2.3-a-2aee31126f\" (UID: \"54b81dd7db0e4aea49286baff36c851a\") " pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.207257 kubelet[3150]: I0123 18:57:45.207080 3150 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.207257 kubelet[3150]: I0123 18:57:45.207127 3150 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459.2.3-a-2aee31126f" Jan 23 18:57:45.977329 kubelet[3150]: I0123 18:57:45.977037 3150 apiserver.go:52] "Watching apiserver" Jan 23 18:57:46.002311 kubelet[3150]: I0123 18:57:46.002286 3150 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 18:57:46.059171 kubelet[3150]: I0123 18:57:46.058618 3150 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:46.059171 kubelet[3150]: I0123 18:57:46.059115 3150 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:46.070448 kubelet[3150]: I0123 18:57:46.070424 3150 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:46.070534 kubelet[3150]: E0123 18:57:46.070470 3150 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459.2.3-a-2aee31126f\" already exists" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:46.077080 kubelet[3150]: I0123 18:57:46.076761 3150 warnings.go:110] "Warning: metadata.name: this is used in the Pod's 
hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 23 18:57:46.077080 kubelet[3150]: E0123 18:57:46.076810 3150 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459.2.3-a-2aee31126f\" already exists" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" Jan 23 18:57:46.086829 kubelet[3150]: I0123 18:57:46.086772 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459.2.3-a-2aee31126f" podStartSLOduration=1.086758926 podStartE2EDuration="1.086758926s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:46.077023076 +0000 UTC m=+1.149523712" watchObservedRunningTime="2026-01-23 18:57:46.086758926 +0000 UTC m=+1.159259691" Jan 23 18:57:46.098885 kubelet[3150]: I0123 18:57:46.098606 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459.2.3-a-2aee31126f" podStartSLOduration=4.098594069 podStartE2EDuration="4.098594069s" podCreationTimestamp="2026-01-23 18:57:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:46.086994143 +0000 UTC m=+1.159494779" watchObservedRunningTime="2026-01-23 18:57:46.098594069 +0000 UTC m=+1.171094700" Jan 23 18:57:46.098885 kubelet[3150]: I0123 18:57:46.098691 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459.2.3-a-2aee31126f" podStartSLOduration=1.0986862099999999 podStartE2EDuration="1.09868621s" podCreationTimestamp="2026-01-23 18:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:46.098339338 +0000 UTC m=+1.170839977" watchObservedRunningTime="2026-01-23 18:57:46.09868621 +0000 UTC m=+1.171186831" Jan 23 18:57:47.300221 waagent[1892]: 2026-01-23T18:57:47.300180Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 23 18:57:47.308217 waagent[1892]: 2026-01-23T18:57:47.308180Z INFO ExtHandler Jan 23 18:57:47.308314 waagent[1892]: 2026-01-23T18:57:47.308242Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 23 18:57:47.311653 waagent[1892]: 2026-01-23T18:57:47.311624Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 23 18:57:47.377138 waagent[1892]: 2026-01-23T18:57:47.377086Z INFO ExtHandler Downloaded certificate {'thumbprint': 'E32422E1D946CD7B21690E1E8F94F6973549A1CF', 'hasPrivateKey': True} Jan 23 18:57:47.377463 waagent[1892]: 2026-01-23T18:57:47.377435Z INFO ExtHandler Fetch goal state completed Jan 23 18:57:47.377746 waagent[1892]: 2026-01-23T18:57:47.377719Z INFO ExtHandler ExtHandler Jan 23 18:57:47.377790 waagent[1892]: 2026-01-23T18:57:47.377772Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 638ceb96-0a42-4c4a-95b0-d74e74b8df14 correlation 2f9d89d1-0aa1-4b77-bbe3-dd84e6e317ed created: 2026-01-23T18:57:43.645439Z] Jan 23 18:57:47.377993 waagent[1892]: 2026-01-23T18:57:47.377971Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
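[editor's note: the pod_startup_latency_tracker entries above make the SLO arithmetic visible: these static pods pull nothing (hence the zero "0001-01-01" pull timestamps), so podStartE2EDuration is simply observedRunningTime minus podCreationTimestamp, and the "m=+…" suffix reads as an offset on the kubelet's monotonic clock since startup. A check of the kube-scheduler figure, doing the arithmetic on the seconds field directly to keep nanosecond precision:]

```python
from decimal import Decimal

# Both timestamps fall in the same minute (18:57), so the seconds fields
# can be subtracted directly; values copied from the entry above.
created_s  = Decimal("45")             # podCreationTimestamp 18:57:45
observed_s = Decimal("46.086758926")   # observedRunningTime  18:57:46.086758926
print(observed_s - created_s)          # 1.086758926 == podStartSLOduration
```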
Jan 23 18:57:47.378344 waagent[1892]: 2026-01-23T18:57:47.378322Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms] Jan 23 18:57:49.010775 kubelet[3150]: I0123 18:57:49.010744 3150 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 18:57:49.011196 containerd[1705]: time="2026-01-23T18:57:49.011156122Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 18:57:49.011929 kubelet[3150]: I0123 18:57:49.011906 3150 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 18:57:49.776861 systemd[1]: Created slice kubepods-besteffort-pod2e9bb408_550e_40d3_857f_0c22cb6d49cc.slice - libcontainer container kubepods-besteffort-pod2e9bb408_550e_40d3_857f_0c22cb6d49cc.slice. Jan 23 18:57:49.835407 kubelet[3150]: I0123 18:57:49.835378 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e9bb408-550e-40d3-857f-0c22cb6d49cc-kube-proxy\") pod \"kube-proxy-pvmhw\" (UID: \"2e9bb408-550e-40d3-857f-0c22cb6d49cc\") " pod="kube-system/kube-proxy-pvmhw" Jan 23 18:57:49.835407 kubelet[3150]: I0123 18:57:49.835410 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e9bb408-550e-40d3-857f-0c22cb6d49cc-lib-modules\") pod \"kube-proxy-pvmhw\" (UID: \"2e9bb408-550e-40d3-857f-0c22cb6d49cc\") " pod="kube-system/kube-proxy-pvmhw" Jan 23 18:57:49.835540 kubelet[3150]: I0123 18:57:49.835426 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e9bb408-550e-40d3-857f-0c22cb6d49cc-xtables-lock\") pod \"kube-proxy-pvmhw\" (UID: \"2e9bb408-550e-40d3-857f-0c22cb6d49cc\") " pod="kube-system/kube-proxy-pvmhw" Jan 23 18:57:49.835540 kubelet[3150]: I0123 18:57:49.835443 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mkwb\" (UniqueName: \"kubernetes.io/projected/2e9bb408-550e-40d3-857f-0c22cb6d49cc-kube-api-access-9mkwb\") pod \"kube-proxy-pvmhw\" (UID: \"2e9bb408-550e-40d3-857f-0c22cb6d49cc\") " pod="kube-system/kube-proxy-pvmhw" Jan 23 18:57:50.090821 containerd[1705]: time="2026-01-23T18:57:50.090414707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvmhw,Uid:2e9bb408-550e-40d3-857f-0c22cb6d49cc,Namespace:kube-system,Attempt:0,}" Jan 23 18:57:50.154214 containerd[1705]: time="2026-01-23T18:57:50.154109999Z" level=info msg="connecting to shim 4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c" address="unix:///run/containerd/s/fc7a6aa1c55dbd4c8fb8b6d999e0626937d9a967f37a96d3addf47e84cbf0b3c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:50.180831 systemd[1]: Started cri-containerd-4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c.scope - libcontainer container 4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c. 
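[editor's note: bursts of reconciler_common entries like the kube-proxy ones above all share one shape: a volume name, a UniqueName of plugin/pod-UID-volume, and a trailing pod field. A throwaway parser for grouping such bursts by pod; the regexes target the literal \" escaping that appears in this capture, and the function name is my own:]

```python
import re
from collections import defaultdict

VOL_RE = re.compile(r'for volume \\"(?P<vol>[^"]+)\\"')   # matches: for volume \"xtables-lock\"
POD_RE = re.compile(r'pod="(?P<pod>[^"]+)"')              # matches the trailing pod="ns/name" field

def volumes_by_pod(lines):
    """Group VerifyControllerAttachedVolume entries by their pod field."""
    grouped = defaultdict(list)
    for line in lines:
        v, p = VOL_RE.search(line), POD_RE.search(line)
        if v and p:
            grouped[p["pod"]].append(v["vol"])
    return dict(grouped)

sample = ('reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started '
          'for volume \\"xtables-lock\\" (UniqueName: ...) " pod="kube-system/kube-proxy-pvmhw"')
print(volumes_by_pod([sample]))  # {'kube-system/kube-proxy-pvmhw': ['xtables-lock']}
```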
Jan 23 18:57:50.207438 containerd[1705]: time="2026-01-23T18:57:50.207398010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-pvmhw,Uid:2e9bb408-550e-40d3-857f-0c22cb6d49cc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c\"" Jan 23 18:57:50.216979 containerd[1705]: time="2026-01-23T18:57:50.216944549Z" level=info msg="CreateContainer within sandbox \"4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 18:57:50.244040 containerd[1705]: time="2026-01-23T18:57:50.243980710Z" level=info msg="Container 7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:50.257576 systemd[1]: Created slice kubepods-besteffort-pod2a4e7637_8ac6_4ee8_9178_74658a9e38f3.slice - libcontainer container kubepods-besteffort-pod2a4e7637_8ac6_4ee8_9178_74658a9e38f3.slice. Jan 23 18:57:50.275746 containerd[1705]: time="2026-01-23T18:57:50.275697247Z" level=info msg="CreateContainer within sandbox \"4c0c1ef331c58fa2a423dc1570d8508bfd342debab571559584fa4b5b74cb94c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef\"" Jan 23 18:57:50.277367 containerd[1705]: time="2026-01-23T18:57:50.277339172Z" level=info msg="StartContainer for \"7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef\"" Jan 23 18:57:50.280915 containerd[1705]: time="2026-01-23T18:57:50.280797921Z" level=info msg="connecting to shim 7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef" address="unix:///run/containerd/s/fc7a6aa1c55dbd4c8fb8b6d999e0626937d9a967f37a96d3addf47e84cbf0b3c" protocol=ttrpc version=3 Jan 23 18:57:50.302811 systemd[1]: Started cri-containerd-7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef.scope - libcontainer container 7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef. 
Jan 23 18:57:50.338808 kubelet[3150]: I0123 18:57:50.338782 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xssg\" (UniqueName: \"kubernetes.io/projected/2a4e7637-8ac6-4ee8-9178-74658a9e38f3-kube-api-access-5xssg\") pod \"tigera-operator-65cdcdfd6d-cvllq\" (UID: \"2a4e7637-8ac6-4ee8-9178-74658a9e38f3\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-cvllq" Jan 23 18:57:50.339067 kubelet[3150]: I0123 18:57:50.338919 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2a4e7637-8ac6-4ee8-9178-74658a9e38f3-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-cvllq\" (UID: \"2a4e7637-8ac6-4ee8-9178-74658a9e38f3\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-cvllq" Jan 23 18:57:50.356965 containerd[1705]: time="2026-01-23T18:57:50.356870716Z" level=info msg="StartContainer for \"7804e1d928f74bf4b56ba210ced9b6ee0d18ae0bc781e4ba612365776ac487ef\" returns successfully" Jan 23 18:57:50.570131 containerd[1705]: time="2026-01-23T18:57:50.570094517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-cvllq,Uid:2a4e7637-8ac6-4ee8-9178-74658a9e38f3,Namespace:tigera-operator,Attempt:0,}" Jan 23 18:57:50.604582 containerd[1705]: time="2026-01-23T18:57:50.604507445Z" level=info msg="connecting to shim 28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1" address="unix:///run/containerd/s/b5a53958fc60162f25e0f2fe4d1bfa9dfedd4e6319890d8d5101926e9231c71b" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:57:50.629837 systemd[1]: Started cri-containerd-28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1.scope - libcontainer container 28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1. Jan 23 18:57:50.675257 containerd[1705]: time="2026-01-23T18:57:50.675150075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-cvllq,Uid:2a4e7637-8ac6-4ee8-9178-74658a9e38f3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1\"" Jan 23 18:57:50.677971 containerd[1705]: time="2026-01-23T18:57:50.677950220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 18:57:50.948551 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1479255082.mount: Deactivated successfully. Jan 23 18:57:51.080626 kubelet[3150]: I0123 18:57:51.080431 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-pvmhw" podStartSLOduration=2.080416776 podStartE2EDuration="2.080416776s" podCreationTimestamp="2026-01-23 18:57:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:57:51.080290847 +0000 UTC m=+6.152791511" watchObservedRunningTime="2026-01-23 18:57:51.080416776 +0000 UTC m=+6.152917410" Jan 23 18:57:52.088499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189139918.mount: Deactivated successfully. 
Jan 23 18:57:53.379792 containerd[1705]: time="2026-01-23T18:57:53.379752878Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:53.382635 containerd[1705]: time="2026-01-23T18:57:53.382608146Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 23 18:57:53.385785 containerd[1705]: time="2026-01-23T18:57:53.385743015Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:53.389171 containerd[1705]: time="2026-01-23T18:57:53.389132053Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:57:53.389905 containerd[1705]: time="2026-01-23T18:57:53.389546417Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.711566514s" Jan 23 18:57:53.389905 containerd[1705]: time="2026-01-23T18:57:53.389575899Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 23 18:57:53.400160 containerd[1705]: time="2026-01-23T18:57:53.400130571Z" level=info msg="CreateContainer within sandbox \"28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 18:57:53.424966 waagent[1892]: 2026-01-23T18:57:53.424931Z INFO ExtHandler Jan 23 18:57:53.425233 waagent[1892]: 2026-01-23T18:57:53.425031Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 4116cc02-b629-4847-97c4-fc77fc8a05f9 eTag: 1628975947076975384 source: Fabric] Jan 23 18:57:53.425381 waagent[1892]: 2026-01-23T18:57:53.425348Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
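[editor's note: the pull entries above carry enough to estimate throughput: 25061691 bytes read for quay.io/tigera/operator:v1.38.7 against a reported pull time of 2.711566514s. The byte counter and the timer may not cover exactly the same span, so treat this as a rough figure:]

```python
bytes_read = 25_061_691     # "stop pulling image ... bytes read=25061691"
pull_secs  = 2.711566514    # "Pulled image ... in 2.711566514s"
rate = bytes_read / pull_secs
print(f"~{rate / 1e6:.1f} MB/s (~{rate / 2**20:.1f} MiB/s)")  # ~9.2 MB/s (~8.8 MiB/s)
```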
Jan 23 18:57:53.430700 containerd[1705]: time="2026-01-23T18:57:53.430642273Z" level=info msg="Container 714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:57:53.445217 containerd[1705]: time="2026-01-23T18:57:53.445187617Z" level=info msg="CreateContainer within sandbox \"28d8c84bdf64336204197f32dd3a11ac38b436ae0cb8dc11c407d78d732188a1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4\"" Jan 23 18:57:53.445700 containerd[1705]: time="2026-01-23T18:57:53.445560029Z" level=info msg="StartContainer for \"714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4\"" Jan 23 18:57:53.446629 containerd[1705]: time="2026-01-23T18:57:53.446605622Z" level=info msg="connecting to shim 714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4" address="unix:///run/containerd/s/b5a53958fc60162f25e0f2fe4d1bfa9dfedd4e6319890d8d5101926e9231c71b" protocol=ttrpc version=3 Jan 23 18:57:53.465840 systemd[1]: Started cri-containerd-714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4.scope - libcontainer container 714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4. Jan 23 18:57:53.493146 containerd[1705]: time="2026-01-23T18:57:53.493105578Z" level=info msg="StartContainer for \"714a8feced7aec327b5df54e5ce5c024e94cfca641c28ade6894a7a1c3166fa4\" returns successfully" Jan 23 18:57:56.603500 kubelet[3150]: I0123 18:57:56.603433 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-cvllq" podStartSLOduration=3.889810473 podStartE2EDuration="6.603039575s" podCreationTimestamp="2026-01-23 18:57:50 +0000 UTC" firstStartedPulling="2026-01-23 18:57:50.676923695 +0000 UTC m=+5.749424317" lastFinishedPulling="2026-01-23 18:57:53.390152794 +0000 UTC m=+8.462653419" observedRunningTime="2026-01-23 18:57:54.088113391 +0000 UTC m=+9.160614029" watchObservedRunningTime="2026-01-23 18:57:56.603039575 +0000 UTC m=+11.675540207" Jan 23 18:57:59.127485 sudo[2165]: pam_unix(sudo:session): session closed for user root Jan 23 18:57:59.226832 sshd[2152]: Connection closed by 10.200.16.10 port 38586 Jan 23 18:57:59.229064 sshd-session[2146]: pam_unix(sshd:session): session closed for user core Jan 23 18:57:59.233482 systemd[1]: sshd@6-10.200.4.15:22-10.200.16.10:38586.service: Deactivated successfully. Jan 23 18:57:59.237880 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 18:57:59.238065 systemd[1]: session-9.scope: Consumed 3.832s CPU time, 231M memory peak. Jan 23 18:57:59.240603 systemd-logind[1687]: Session 9 logged out. Waiting for processes to exit. Jan 23 18:57:59.243953 systemd-logind[1687]: Removed session 9. Jan 23 18:58:04.067745 systemd[1]: Created slice kubepods-besteffort-pod70a14ea9_c6f4_4aef_9421_236d6582279c.slice - libcontainer container kubepods-besteffort-pod70a14ea9_c6f4_4aef_9421_236d6582279c.slice. 
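[editor's note: the "Created slice kubepods-besteffort-pod70a14ea9_c6f4_4aef_9421_236d6582279c.slice" entry shows the cgroup naming used under the systemd driver reported earlier (cgroupDriver="systemd"): the QoS class plus the pod UID with dashes mapped to underscores, since "-" is systemd's slice-hierarchy separator. A sketch of just that mapping, assuming only the pattern visible in these entries:]

```python
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    """Rebuild the kubepods slice name as it appears in the journal above."""
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

# UID taken from the calico-typha volume entries below.
print(pod_slice_name("70a14ea9-c6f4-4aef-9421-236d6582279c"))
# kubepods-besteffort-pod70a14ea9_c6f4_4aef_9421_236d6582279c.slice
```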
Jan 23 18:58:04.124334 kubelet[3150]: I0123 18:58:04.124293 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/70a14ea9-c6f4-4aef-9421-236d6582279c-tigera-ca-bundle\") pod \"calico-typha-87988899d-6rjvg\" (UID: \"70a14ea9-c6f4-4aef-9421-236d6582279c\") " pod="calico-system/calico-typha-87988899d-6rjvg" Jan 23 18:58:04.124334 kubelet[3150]: I0123 18:58:04.124333 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/70a14ea9-c6f4-4aef-9421-236d6582279c-typha-certs\") pod \"calico-typha-87988899d-6rjvg\" (UID: \"70a14ea9-c6f4-4aef-9421-236d6582279c\") " pod="calico-system/calico-typha-87988899d-6rjvg" Jan 23 18:58:04.124644 kubelet[3150]: I0123 18:58:04.124352 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzzzc\" (UniqueName: \"kubernetes.io/projected/70a14ea9-c6f4-4aef-9421-236d6582279c-kube-api-access-qzzzc\") pod \"calico-typha-87988899d-6rjvg\" (UID: \"70a14ea9-c6f4-4aef-9421-236d6582279c\") " pod="calico-system/calico-typha-87988899d-6rjvg" Jan 23 18:58:04.230732 systemd[1]: Created slice kubepods-besteffort-pod50413c8b_10cd_40ff_94c8_e1b15a6204fa.slice - libcontainer container kubepods-besteffort-pod50413c8b_10cd_40ff_94c8_e1b15a6204fa.slice. Jan 23 18:58:04.325836 kubelet[3150]: I0123 18:58:04.325721 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/50413c8b-10cd-40ff-94c8-e1b15a6204fa-node-certs\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.325836 kubelet[3150]: I0123 18:58:04.325766 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50413c8b-10cd-40ff-94c8-e1b15a6204fa-tigera-ca-bundle\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.325836 kubelet[3150]: I0123 18:58:04.325784 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-var-run-calico\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.325836 kubelet[3150]: I0123 18:58:04.325801 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-cni-net-dir\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.325836 kubelet[3150]: I0123 18:58:04.325816 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-var-lib-calico\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326060 kubelet[3150]: I0123 18:58:04.325835 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-cni-log-dir\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326060 kubelet[3150]: I0123 18:58:04.325853 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-cni-bin-dir\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326060 kubelet[3150]: I0123 18:58:04.325870 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-policysync\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326060 kubelet[3150]: I0123 18:58:04.325885 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs6jd\" (UniqueName: \"kubernetes.io/projected/50413c8b-10cd-40ff-94c8-e1b15a6204fa-kube-api-access-qs6jd\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326060 kubelet[3150]: I0123 18:58:04.325902 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-flexvol-driver-host\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326142 kubelet[3150]: I0123 18:58:04.325919 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-lib-modules\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.326142 kubelet[3150]: I0123 18:58:04.325937 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50413c8b-10cd-40ff-94c8-e1b15a6204fa-xtables-lock\") pod \"calico-node-8x4kv\" (UID: \"50413c8b-10cd-40ff-94c8-e1b15a6204fa\") " pod="calico-system/calico-node-8x4kv" Jan 23 18:58:04.378874 containerd[1705]: time="2026-01-23T18:58:04.378833147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87988899d-6rjvg,Uid:70a14ea9-c6f4-4aef-9421-236d6582279c,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:04.420052 kubelet[3150]: E0123 18:58:04.418807 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:58:04.428343 containerd[1705]: time="2026-01-23T18:58:04.428157688Z" level=info msg="connecting to shim 60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442" address="unix:///run/containerd/s/59d643189be74947da35ffc330946802984ba85c6371997fec8112f37bc821ea" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:04.433085 kubelet[3150]: E0123 18:58:04.433062 3150 driver-call.go:262] Failed 
to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.433085 kubelet[3150]: W0123 18:58:04.433085 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.433247 kubelet[3150]: E0123 18:58:04.433107 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.436738 kubelet[3150]: E0123 18:58:04.435758 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.436856 kubelet[3150]: W0123 18:58:04.436839 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.437021 kubelet[3150]: E0123 18:58:04.436916 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.442524 kubelet[3150]: E0123 18:58:04.442507 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.442702 kubelet[3150]: W0123 18:58:04.442602 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.442702 kubelet[3150]: E0123 18:58:04.442621 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.443488 kubelet[3150]: E0123 18:58:04.443145 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.443488 kubelet[3150]: W0123 18:58:04.443160 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.443488 kubelet[3150]: E0123 18:58:04.443176 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.443755 kubelet[3150]: E0123 18:58:04.443745 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.443877 kubelet[3150]: W0123 18:58:04.443867 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.444769 kubelet[3150]: E0123 18:58:04.444283 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.444769 kubelet[3150]: E0123 18:58:04.444506 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.444769 kubelet[3150]: W0123 18:58:04.444516 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.444769 kubelet[3150]: E0123 18:58:04.444528 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.444769 kubelet[3150]: E0123 18:58:04.444647 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.444769 kubelet[3150]: W0123 18:58:04.444653 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.444769 kubelet[3150]: E0123 18:58:04.444660 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.445146 kubelet[3150]: E0123 18:58:04.445133 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.445146 kubelet[3150]: W0123 18:58:04.445143 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.445204 kubelet[3150]: E0123 18:58:04.445153 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.451045 kubelet[3150]: E0123 18:58:04.451029 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.451120 kubelet[3150]: W0123 18:58:04.451045 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.451120 kubelet[3150]: E0123 18:58:04.451060 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.451426 kubelet[3150]: E0123 18:58:04.451206 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.451426 kubelet[3150]: W0123 18:58:04.451211 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.451426 kubelet[3150]: E0123 18:58:04.451220 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.454244 kubelet[3150]: E0123 18:58:04.454133 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.454244 kubelet[3150]: W0123 18:58:04.454150 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.454244 kubelet[3150]: E0123 18:58:04.454165 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.454786 kubelet[3150]: E0123 18:58:04.454558 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.454786 kubelet[3150]: W0123 18:58:04.454569 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.454786 kubelet[3150]: E0123 18:58:04.454581 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.457919 kubelet[3150]: E0123 18:58:04.457900 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.457919 kubelet[3150]: W0123 18:58:04.457919 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.458024 kubelet[3150]: E0123 18:58:04.457932 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.458147 kubelet[3150]: E0123 18:58:04.458113 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.458147 kubelet[3150]: W0123 18:58:04.458121 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.458147 kubelet[3150]: E0123 18:58:04.458129 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.458310 kubelet[3150]: E0123 18:58:04.458252 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.458310 kubelet[3150]: W0123 18:58:04.458260 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.458310 kubelet[3150]: E0123 18:58:04.458267 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.458483 kubelet[3150]: E0123 18:58:04.458403 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.458483 kubelet[3150]: W0123 18:58:04.458410 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.458483 kubelet[3150]: E0123 18:58:04.458417 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.460730 kubelet[3150]: E0123 18:58:04.460363 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.460730 kubelet[3150]: W0123 18:58:04.460377 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.460730 kubelet[3150]: E0123 18:58:04.460395 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.460730 kubelet[3150]: E0123 18:58:04.460566 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.460730 kubelet[3150]: W0123 18:58:04.460573 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.460730 kubelet[3150]: E0123 18:58:04.460584 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.460945 kubelet[3150]: E0123 18:58:04.460758 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.460945 kubelet[3150]: W0123 18:58:04.460765 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.460945 kubelet[3150]: E0123 18:58:04.460774 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.460945 kubelet[3150]: E0123 18:58:04.460890 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.460945 kubelet[3150]: W0123 18:58:04.460897 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.460945 kubelet[3150]: E0123 18:58:04.460904 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.461081 kubelet[3150]: E0123 18:58:04.460992 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.461081 kubelet[3150]: W0123 18:58:04.460996 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.461081 kubelet[3150]: E0123 18:58:04.461003 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.462268 kubelet[3150]: E0123 18:58:04.461308 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.462268 kubelet[3150]: W0123 18:58:04.461318 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.462268 kubelet[3150]: E0123 18:58:04.461326 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.462268 kubelet[3150]: E0123 18:58:04.461886 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.462268 kubelet[3150]: W0123 18:58:04.461896 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.462268 kubelet[3150]: E0123 18:58:04.461909 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.477914 systemd[1]: Started cri-containerd-60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442.scope - libcontainer container 60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442. Jan 23 18:58:04.507368 kubelet[3150]: E0123 18:58:04.507280 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.507368 kubelet[3150]: W0123 18:58:04.507298 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.507368 kubelet[3150]: E0123 18:58:04.507315 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.507876 kubelet[3150]: E0123 18:58:04.507817 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.507876 kubelet[3150]: W0123 18:58:04.507830 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.507876 kubelet[3150]: E0123 18:58:04.507843 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.508228 kubelet[3150]: E0123 18:58:04.508165 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.508228 kubelet[3150]: W0123 18:58:04.508182 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.508228 kubelet[3150]: E0123 18:58:04.508193 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.508690 kubelet[3150]: E0123 18:58:04.508622 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.508690 kubelet[3150]: W0123 18:58:04.508632 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.508690 kubelet[3150]: E0123 18:58:04.508642 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.509024 kubelet[3150]: E0123 18:58:04.509014 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.509201 kubelet[3150]: W0123 18:58:04.509155 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.509201 kubelet[3150]: E0123 18:58:04.509169 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.509587 kubelet[3150]: E0123 18:58:04.509519 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.509587 kubelet[3150]: W0123 18:58:04.509532 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.509587 kubelet[3150]: E0123 18:58:04.509544 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.510008 kubelet[3150]: E0123 18:58:04.509998 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.510101 kubelet[3150]: W0123 18:58:04.510047 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.510101 kubelet[3150]: E0123 18:58:04.510061 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.510477 kubelet[3150]: E0123 18:58:04.510378 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.510477 kubelet[3150]: W0123 18:58:04.510390 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.510477 kubelet[3150]: E0123 18:58:04.510401 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.510994 kubelet[3150]: E0123 18:58:04.510894 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.511112 kubelet[3150]: W0123 18:58:04.511057 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.511112 kubelet[3150]: E0123 18:58:04.511074 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.511468 kubelet[3150]: E0123 18:58:04.511445 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.511735 kubelet[3150]: W0123 18:58:04.511592 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.511735 kubelet[3150]: E0123 18:58:04.511608 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.512106 kubelet[3150]: E0123 18:58:04.512038 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.512106 kubelet[3150]: W0123 18:58:04.512050 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.512106 kubelet[3150]: E0123 18:58:04.512062 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.512646 kubelet[3150]: E0123 18:58:04.512571 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.512646 kubelet[3150]: W0123 18:58:04.512585 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.512646 kubelet[3150]: E0123 18:58:04.512598 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.513023 kubelet[3150]: E0123 18:58:04.512897 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.513023 kubelet[3150]: W0123 18:58:04.512907 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.513023 kubelet[3150]: E0123 18:58:04.512917 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.513370 kubelet[3150]: E0123 18:58:04.513300 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.513370 kubelet[3150]: W0123 18:58:04.513311 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.513370 kubelet[3150]: E0123 18:58:04.513323 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.513662 kubelet[3150]: E0123 18:58:04.513613 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.513662 kubelet[3150]: W0123 18:58:04.513621 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.513662 kubelet[3150]: E0123 18:58:04.513630 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.513915 kubelet[3150]: E0123 18:58:04.513903 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.514034 kubelet[3150]: W0123 18:58:04.513972 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.514034 kubelet[3150]: E0123 18:58:04.513985 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.514267 kubelet[3150]: E0123 18:58:04.514232 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.514267 kubelet[3150]: W0123 18:58:04.514242 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.514386 kubelet[3150]: E0123 18:58:04.514326 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.514636 kubelet[3150]: E0123 18:58:04.514567 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.514636 kubelet[3150]: W0123 18:58:04.514577 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.514636 kubelet[3150]: E0123 18:58:04.514600 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.515058 kubelet[3150]: E0123 18:58:04.514933 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.515058 kubelet[3150]: W0123 18:58:04.514944 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.515222 kubelet[3150]: E0123 18:58:04.514955 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.515435 kubelet[3150]: E0123 18:58:04.515326 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.515435 kubelet[3150]: W0123 18:58:04.515345 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.515435 kubelet[3150]: E0123 18:58:04.515356 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.530488 kubelet[3150]: E0123 18:58:04.530470 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.530488 kubelet[3150]: W0123 18:58:04.530483 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.530585 kubelet[3150]: E0123 18:58:04.530494 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.530585 kubelet[3150]: I0123 18:58:04.530518 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ff2219dc-84d2-48b9-9cbc-2450678772ac-varrun\") pod \"csi-node-driver-w8lzq\" (UID: \"ff2219dc-84d2-48b9-9cbc-2450678772ac\") " pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:04.530776 kubelet[3150]: E0123 18:58:04.530765 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.530811 kubelet[3150]: W0123 18:58:04.530777 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.530811 kubelet[3150]: E0123 18:58:04.530787 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.530811 kubelet[3150]: I0123 18:58:04.530806 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ff2219dc-84d2-48b9-9cbc-2450678772ac-kubelet-dir\") pod \"csi-node-driver-w8lzq\" (UID: \"ff2219dc-84d2-48b9-9cbc-2450678772ac\") " pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:04.531832 kubelet[3150]: E0123 18:58:04.530913 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.531832 kubelet[3150]: W0123 18:58:04.530919 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.531832 kubelet[3150]: E0123 18:58:04.530926 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.531832 kubelet[3150]: I0123 18:58:04.530938 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ff2219dc-84d2-48b9-9cbc-2450678772ac-socket-dir\") pod \"csi-node-driver-w8lzq\" (UID: \"ff2219dc-84d2-48b9-9cbc-2450678772ac\") " pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:04.531832 kubelet[3150]: E0123 18:58:04.531030 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.531832 kubelet[3150]: W0123 18:58:04.531035 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.531832 kubelet[3150]: E0123 18:58:04.531041 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.531832 kubelet[3150]: I0123 18:58:04.531053 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxqvh\" (UniqueName: \"kubernetes.io/projected/ff2219dc-84d2-48b9-9cbc-2450678772ac-kube-api-access-lxqvh\") pod \"csi-node-driver-w8lzq\" (UID: \"ff2219dc-84d2-48b9-9cbc-2450678772ac\") " pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:04.531832 kubelet[3150]: E0123 18:58:04.531136 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532146 kubelet[3150]: W0123 18:58:04.531142 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532146 kubelet[3150]: E0123 18:58:04.531148 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532146 kubelet[3150]: I0123 18:58:04.531163 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ff2219dc-84d2-48b9-9cbc-2450678772ac-registration-dir\") pod \"csi-node-driver-w8lzq\" (UID: \"ff2219dc-84d2-48b9-9cbc-2450678772ac\") " pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:04.532146 kubelet[3150]: E0123 18:58:04.531265 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532146 kubelet[3150]: W0123 18:58:04.531273 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532146 kubelet[3150]: E0123 18:58:04.531279 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532146 kubelet[3150]: E0123 18:58:04.531349 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532146 kubelet[3150]: W0123 18:58:04.531354 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532146 kubelet[3150]: E0123 18:58:04.531360 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531451 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532397 kubelet[3150]: W0123 18:58:04.531456 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531461 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531532 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532397 kubelet[3150]: W0123 18:58:04.531537 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531543 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531622 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532397 kubelet[3150]: W0123 18:58:04.531626 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531632 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532397 kubelet[3150]: E0123 18:58:04.531733 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532620 kubelet[3150]: W0123 18:58:04.531739 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.531746 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.531855 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532620 kubelet[3150]: W0123 18:58:04.531861 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.531868 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.531960 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532620 kubelet[3150]: W0123 18:58:04.531966 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.531972 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.532620 kubelet[3150]: E0123 18:58:04.532072 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532620 kubelet[3150]: W0123 18:58:04.532078 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532876 kubelet[3150]: E0123 18:58:04.532084 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.532876 kubelet[3150]: E0123 18:58:04.532164 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.532876 kubelet[3150]: W0123 18:58:04.532170 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.532876 kubelet[3150]: E0123 18:58:04.532177 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.594451 containerd[1705]: time="2026-01-23T18:58:04.594342949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8x4kv,Uid:50413c8b-10cd-40ff-94c8-e1b15a6204fa,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:04.595694 containerd[1705]: time="2026-01-23T18:58:04.595613773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-87988899d-6rjvg,Uid:70a14ea9-c6f4-4aef-9421-236d6582279c,Namespace:calico-system,Attempt:0,} returns sandbox id \"60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442\"" Jan 23 18:58:04.597558 containerd[1705]: time="2026-01-23T18:58:04.597406870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 18:58:04.631662 kubelet[3150]: E0123 18:58:04.631640 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.631662 kubelet[3150]: W0123 18:58:04.631659 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.631843 kubelet[3150]: E0123 18:58:04.631712 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.631893 kubelet[3150]: E0123 18:58:04.631882 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.631893 kubelet[3150]: W0123 18:58:04.631891 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.631989 kubelet[3150]: E0123 18:58:04.631900 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.632179 kubelet[3150]: E0123 18:58:04.632149 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632179 kubelet[3150]: W0123 18:58:04.632171 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632288 kubelet[3150]: E0123 18:58:04.632180 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.632324 kubelet[3150]: E0123 18:58:04.632298 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632324 kubelet[3150]: W0123 18:58:04.632303 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632324 kubelet[3150]: E0123 18:58:04.632311 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.632464 kubelet[3150]: E0123 18:58:04.632446 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632464 kubelet[3150]: W0123 18:58:04.632454 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632464 kubelet[3150]: E0123 18:58:04.632461 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.632618 kubelet[3150]: E0123 18:58:04.632592 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632618 kubelet[3150]: W0123 18:58:04.632611 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632696 kubelet[3150]: E0123 18:58:04.632619 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.632761 kubelet[3150]: E0123 18:58:04.632750 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632761 kubelet[3150]: W0123 18:58:04.632758 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632844 kubelet[3150]: E0123 18:58:04.632766 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.632878 kubelet[3150]: E0123 18:58:04.632872 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.632949 kubelet[3150]: W0123 18:58:04.632879 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.632949 kubelet[3150]: E0123 18:58:04.632885 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.633053 kubelet[3150]: E0123 18:58:04.633039 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.633053 kubelet[3150]: W0123 18:58:04.633050 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.633106 kubelet[3150]: E0123 18:58:04.633059 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.633178 kubelet[3150]: E0123 18:58:04.633167 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.633178 kubelet[3150]: W0123 18:58:04.633176 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.633227 kubelet[3150]: E0123 18:58:04.633182 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.633320 kubelet[3150]: E0123 18:58:04.633301 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.633320 kubelet[3150]: W0123 18:58:04.633316 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.633393 kubelet[3150]: E0123 18:58:04.633324 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.633606 kubelet[3150]: E0123 18:58:04.633511 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.633606 kubelet[3150]: W0123 18:58:04.633521 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.633606 kubelet[3150]: E0123 18:58:04.633529 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633636 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636250 kubelet[3150]: W0123 18:58:04.633642 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633649 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633822 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636250 kubelet[3150]: W0123 18:58:04.633828 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633833 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633965 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636250 kubelet[3150]: W0123 18:58:04.633970 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.633983 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636250 kubelet[3150]: E0123 18:58:04.634093 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636503 kubelet[3150]: W0123 18:58:04.634105 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634112 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634256 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636503 kubelet[3150]: W0123 18:58:04.634271 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634276 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634464 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636503 kubelet[3150]: W0123 18:58:04.634471 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634478 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636503 kubelet[3150]: E0123 18:58:04.634711 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636503 kubelet[3150]: W0123 18:58:04.634718 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.634728 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.634871 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636771 kubelet[3150]: W0123 18:58:04.634879 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.634886 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.635027 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636771 kubelet[3150]: W0123 18:58:04.635033 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.635040 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.635158 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.636771 kubelet[3150]: W0123 18:58:04.635164 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.636771 kubelet[3150]: E0123 18:58:04.635171 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635306 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.637004 kubelet[3150]: W0123 18:58:04.635311 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635317 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635423 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.637004 kubelet[3150]: W0123 18:58:04.635432 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635450 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635652 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.637004 kubelet[3150]: W0123 18:58:04.635665 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.637004 kubelet[3150]: E0123 18:58:04.635698 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.646579 kubelet[3150]: E0123 18:58:04.646559 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:04.646579 kubelet[3150]: W0123 18:58:04.646574 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:04.646696 kubelet[3150]: E0123 18:58:04.646589 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:04.683360 containerd[1705]: time="2026-01-23T18:58:04.683046952Z" level=info msg="connecting to shim a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16" address="unix:///run/containerd/s/584b3c82caa423b15b981c99611e01f8ea095af82c7aba5dcd4e28ee5372a253" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:04.707845 systemd[1]: Started cri-containerd-a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16.scope - libcontainer container a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16. 
Jan 23 18:58:04.734870 containerd[1705]: time="2026-01-23T18:58:04.734843581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8x4kv,Uid:50413c8b-10cd-40ff-94c8-e1b15a6204fa,Namespace:calico-system,Attempt:0,} returns sandbox id \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\""
Jan 23 18:58:05.980890 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2569882213.mount: Deactivated successfully.
Jan 23 18:58:06.020351 kubelet[3150]: E0123 18:58:06.020300 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:06.467309 containerd[1705]: time="2026-01-23T18:58:06.467203101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:06.470331 containerd[1705]: time="2026-01-23T18:58:06.470292991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 23 18:58:06.473709 containerd[1705]: time="2026-01-23T18:58:06.473657810Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:06.479699 containerd[1705]: time="2026-01-23T18:58:06.479446672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:06.480833 containerd[1705]: time="2026-01-23T18:58:06.480296973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 1.882848952s"
Jan 23 18:58:06.480833 containerd[1705]: time="2026-01-23T18:58:06.480719682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 23 18:58:06.483982 containerd[1705]: time="2026-01-23T18:58:06.483820176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 18:58:06.502115 containerd[1705]: time="2026-01-23T18:58:06.502088638Z" level=info msg="CreateContainer within sandbox \"60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 18:58:06.522074 containerd[1705]: time="2026-01-23T18:58:06.520448842Z" level=info msg="Container b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:58:06.544203 containerd[1705]: time="2026-01-23T18:58:06.544175636Z" level=info msg="CreateContainer within sandbox \"60587d53fd068fa79e39419167c76bc2486fc3660d56e4a925b0db7adbfee442\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3\""
Jan 23 18:58:06.544692 containerd[1705]: time="2026-01-23T18:58:06.544580164Z" level=info msg="StartContainer for \"b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3\""
for \"b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3\"" Jan 23 18:58:06.546075 containerd[1705]: time="2026-01-23T18:58:06.546034597Z" level=info msg="connecting to shim b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3" address="unix:///run/containerd/s/59d643189be74947da35ffc330946802984ba85c6371997fec8112f37bc821ea" protocol=ttrpc version=3 Jan 23 18:58:06.569838 systemd[1]: Started cri-containerd-b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3.scope - libcontainer container b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3. Jan 23 18:58:06.617106 containerd[1705]: time="2026-01-23T18:58:06.617063055Z" level=info msg="StartContainer for \"b47a9a2a5abb6492328233ffbe5472114217c2de119228886fc1c10a520f7ca3\" returns successfully" Jan 23 18:58:07.131334 kubelet[3150]: E0123 18:58:07.131304 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.131334 kubelet[3150]: W0123 18:58:07.131323 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.131334 kubelet[3150]: E0123 18:58:07.131342 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.131817 kubelet[3150]: E0123 18:58:07.131552 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.131817 kubelet[3150]: W0123 18:58:07.131561 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.131817 kubelet[3150]: E0123 18:58:07.131570 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.131817 kubelet[3150]: E0123 18:58:07.131707 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.131817 kubelet[3150]: W0123 18:58:07.131712 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.131817 kubelet[3150]: E0123 18:58:07.131719 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.131975 kubelet[3150]: E0123 18:58:07.131876 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.131975 kubelet[3150]: W0123 18:58:07.131882 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.131975 kubelet[3150]: E0123 18:58:07.131889 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.132065 kubelet[3150]: E0123 18:58:07.132028 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132065 kubelet[3150]: W0123 18:58:07.132034 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132065 kubelet[3150]: E0123 18:58:07.132040 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132150 kubelet[3150]: E0123 18:58:07.132144 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132178 kubelet[3150]: W0123 18:58:07.132150 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132178 kubelet[3150]: E0123 18:58:07.132157 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132288 kubelet[3150]: E0123 18:58:07.132274 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132288 kubelet[3150]: W0123 18:58:07.132282 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132344 kubelet[3150]: E0123 18:58:07.132288 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132396 kubelet[3150]: E0123 18:58:07.132385 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132396 kubelet[3150]: W0123 18:58:07.132393 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132453 kubelet[3150]: E0123 18:58:07.132400 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132523 kubelet[3150]: E0123 18:58:07.132513 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132523 kubelet[3150]: W0123 18:58:07.132520 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132581 kubelet[3150]: E0123 18:58:07.132527 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.132644 kubelet[3150]: E0123 18:58:07.132635 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132696 kubelet[3150]: W0123 18:58:07.132649 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132696 kubelet[3150]: E0123 18:58:07.132655 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132785 kubelet[3150]: E0123 18:58:07.132774 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132785 kubelet[3150]: W0123 18:58:07.132781 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132840 kubelet[3150]: E0123 18:58:07.132788 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.132900 kubelet[3150]: E0123 18:58:07.132890 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.132900 kubelet[3150]: W0123 18:58:07.132897 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.132950 kubelet[3150]: E0123 18:58:07.132903 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.133041 kubelet[3150]: E0123 18:58:07.133031 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.133041 kubelet[3150]: W0123 18:58:07.133038 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.133097 kubelet[3150]: E0123 18:58:07.133044 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.133151 kubelet[3150]: E0123 18:58:07.133140 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.133151 kubelet[3150]: W0123 18:58:07.133147 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.133208 kubelet[3150]: E0123 18:58:07.133163 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.133279 kubelet[3150]: E0123 18:58:07.133269 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.133279 kubelet[3150]: W0123 18:58:07.133276 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.133323 kubelet[3150]: E0123 18:58:07.133282 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.147535 kubelet[3150]: E0123 18:58:07.147512 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.147535 kubelet[3150]: W0123 18:58:07.147528 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.147700 kubelet[3150]: E0123 18:58:07.147549 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.147732 kubelet[3150]: E0123 18:58:07.147726 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.147754 kubelet[3150]: W0123 18:58:07.147744 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.147779 kubelet[3150]: E0123 18:58:07.147754 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.147948 kubelet[3150]: E0123 18:58:07.147936 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.147948 kubelet[3150]: W0123 18:58:07.147945 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148019 kubelet[3150]: E0123 18:58:07.147954 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.148128 kubelet[3150]: E0123 18:58:07.148111 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148128 kubelet[3150]: W0123 18:58:07.148119 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148128 kubelet[3150]: E0123 18:58:07.148126 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.148261 kubelet[3150]: E0123 18:58:07.148251 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148261 kubelet[3150]: W0123 18:58:07.148259 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148319 kubelet[3150]: E0123 18:58:07.148265 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.148382 kubelet[3150]: E0123 18:58:07.148372 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148382 kubelet[3150]: W0123 18:58:07.148380 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148437 kubelet[3150]: E0123 18:58:07.148387 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.148544 kubelet[3150]: E0123 18:58:07.148528 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148544 kubelet[3150]: W0123 18:58:07.148536 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148587 kubelet[3150]: E0123 18:58:07.148544 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.148824 kubelet[3150]: E0123 18:58:07.148812 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148824 kubelet[3150]: W0123 18:58:07.148821 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.148895 kubelet[3150]: E0123 18:58:07.148829 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.148943 kubelet[3150]: E0123 18:58:07.148934 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.148943 kubelet[3150]: W0123 18:58:07.148942 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.149004 kubelet[3150]: E0123 18:58:07.148950 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.149055 kubelet[3150]: E0123 18:58:07.149045 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.149055 kubelet[3150]: W0123 18:58:07.149052 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.149117 kubelet[3150]: E0123 18:58:07.149058 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.149183 kubelet[3150]: E0123 18:58:07.149174 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.149183 kubelet[3150]: W0123 18:58:07.149181 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.149245 kubelet[3150]: E0123 18:58:07.149187 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.149347 kubelet[3150]: E0123 18:58:07.149324 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.149384 kubelet[3150]: W0123 18:58:07.149349 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.149384 kubelet[3150]: E0123 18:58:07.149358 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.149524 kubelet[3150]: E0123 18:58:07.149511 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.149524 kubelet[3150]: W0123 18:58:07.149521 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.149605 kubelet[3150]: E0123 18:58:07.149529 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.149948 kubelet[3150]: E0123 18:58:07.149931 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.149948 kubelet[3150]: W0123 18:58:07.149949 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.150044 kubelet[3150]: E0123 18:58:07.149963 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.150087 kubelet[3150]: E0123 18:58:07.150074 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.150087 kubelet[3150]: W0123 18:58:07.150085 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.150131 kubelet[3150]: E0123 18:58:07.150092 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.150235 kubelet[3150]: E0123 18:58:07.150224 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.150235 kubelet[3150]: W0123 18:58:07.150232 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.150284 kubelet[3150]: E0123 18:58:07.150239 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.150438 kubelet[3150]: E0123 18:58:07.150426 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.150438 kubelet[3150]: W0123 18:58:07.150434 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.150491 kubelet[3150]: E0123 18:58:07.150441 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 18:58:07.150561 kubelet[3150]: E0123 18:58:07.150551 3150 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 18:58:07.150561 kubelet[3150]: W0123 18:58:07.150559 3150 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 18:58:07.150601 kubelet[3150]: E0123 18:58:07.150565 3150 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 18:58:07.650858 containerd[1705]: time="2026-01-23T18:58:07.650814023Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:07.653541 containerd[1705]: time="2026-01-23T18:58:07.653508468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 23 18:58:07.656969 containerd[1705]: time="2026-01-23T18:58:07.656921895Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:07.661726 containerd[1705]: time="2026-01-23T18:58:07.661659538Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:07.662256 containerd[1705]: time="2026-01-23T18:58:07.662096011Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.177971344s" Jan 23 18:58:07.662256 containerd[1705]: time="2026-01-23T18:58:07.662126084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 23 18:58:07.670273 containerd[1705]: time="2026-01-23T18:58:07.670159819Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 18:58:07.690698 containerd[1705]: time="2026-01-23T18:58:07.689328676Z" level=info msg="Container 6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:07.713787 containerd[1705]: time="2026-01-23T18:58:07.713756650Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98\"" Jan 23 18:58:07.714253 containerd[1705]: time="2026-01-23T18:58:07.714181537Z" level=info msg="StartContainer for \"6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98\"" Jan 23 18:58:07.715774 containerd[1705]: time="2026-01-23T18:58:07.715659928Z" level=info msg="connecting to shim 6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98" address="unix:///run/containerd/s/584b3c82caa423b15b981c99611e01f8ea095af82c7aba5dcd4e28ee5372a253" protocol=ttrpc version=3 Jan 23 18:58:07.737811 systemd[1]: Started cri-containerd-6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98.scope - libcontainer container 6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98. 
Jan 23 18:58:07.794865 containerd[1705]: time="2026-01-23T18:58:07.794838866Z" level=info msg="StartContainer for \"6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98\" returns successfully"
Jan 23 18:58:07.798165 systemd[1]: cri-containerd-6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98.scope: Deactivated successfully.
Jan 23 18:58:07.802557 containerd[1705]: time="2026-01-23T18:58:07.802457473Z" level=info msg="received container exit event container_id:\"6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98\" id:\"6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98\" pid:3850 exited_at:{seconds:1769194687 nanos:801997202}"
Jan 23 18:58:07.818902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6737d290da925ed962c4ce0c41a8424038163a13e030ec7846e0c0d7bc704a98-rootfs.mount: Deactivated successfully.
Jan 23 18:58:08.019807 kubelet[3150]: E0123 18:58:08.019760 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:08.113516 kubelet[3150]: I0123 18:58:08.113489 3150 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:58:08.128204 kubelet[3150]: I0123 18:58:08.127632 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-87988899d-6rjvg" podStartSLOduration=2.242638926 podStartE2EDuration="4.127615573s" podCreationTimestamp="2026-01-23 18:58:04 +0000 UTC" firstStartedPulling="2026-01-23 18:58:04.596864807 +0000 UTC m=+19.669365433" lastFinishedPulling="2026-01-23 18:58:06.481841455 +0000 UTC m=+21.554342080" observedRunningTime="2026-01-23 18:58:07.121944902 +0000 UTC m=+22.194445535" watchObservedRunningTime="2026-01-23 18:58:08.127615573 +0000 UTC m=+23.200116207"
Jan 23 18:58:10.019916 kubelet[3150]: E0123 18:58:10.019875 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:10.121907 containerd[1705]: time="2026-01-23T18:58:10.121866013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 18:58:12.020383 kubelet[3150]: E0123 18:58:12.020343 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:13.718944 containerd[1705]: time="2026-01-23T18:58:13.718903166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:13.722095 containerd[1705]: time="2026-01-23T18:58:13.722060883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859"
Jan 23 18:58:13.725457 containerd[1705]: time="2026-01-23T18:58:13.725396264Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:13.729365 containerd[1705]: time="2026-01-23T18:58:13.729323325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 18:58:13.729966 containerd[1705]: time="2026-01-23T18:58:13.729790124Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.607883014s"
Jan 23 18:58:13.729966 containerd[1705]: time="2026-01-23T18:58:13.729819131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\""
Jan 23 18:58:13.736706 containerd[1705]: time="2026-01-23T18:58:13.736653166Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 18:58:13.755693 containerd[1705]: time="2026-01-23T18:58:13.755657485Z" level=info msg="Container 917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3: CDI devices from CRI Config.CDIDevices: []"
Jan 23 18:58:13.774572 containerd[1705]: time="2026-01-23T18:58:13.774546161Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3\""
Jan 23 18:58:13.775217 containerd[1705]: time="2026-01-23T18:58:13.774987534Z" level=info msg="StartContainer for \"917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3\""
Jan 23 18:58:13.776569 containerd[1705]: time="2026-01-23T18:58:13.776538981Z" level=info msg="connecting to shim 917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3" address="unix:///run/containerd/s/584b3c82caa423b15b981c99611e01f8ea095af82c7aba5dcd4e28ee5372a253" protocol=ttrpc version=3
Jan 23 18:58:13.794849 systemd[1]: Started cri-containerd-917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3.scope - libcontainer container 917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3.
Jan 23 18:58:13.848267 containerd[1705]: time="2026-01-23T18:58:13.848234163Z" level=info msg="StartContainer for \"917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3\" returns successfully"
Jan 23 18:58:14.019902 kubelet[3150]: E0123 18:58:14.019865 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:14.990050 systemd[1]: cri-containerd-917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3.scope: Deactivated successfully.
Jan 23 18:58:14.990324 systemd[1]: cri-containerd-917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3.scope: Consumed 451ms CPU time, 195.2M memory peak, 171.3M written to disk.
Jan 23 18:58:14.991962 containerd[1705]: time="2026-01-23T18:58:14.991908716Z" level=info msg="received container exit event container_id:\"917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3\" id:\"917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3\" pid:3908 exited_at:{seconds:1769194694 nanos:991648617}"
Jan 23 18:58:15.012393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-917f56f8d6517194d08ee1e97881ec51f9abff1a35a4e3b4da9b5ef5d3fb18e3-rootfs.mount: Deactivated successfully.
Jan 23 18:58:15.048568 kubelet[3150]: I0123 18:58:15.048548 3150 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 18:58:15.253869 systemd[1]: Created slice kubepods-besteffort-pod55e05950_1192_4962_a3ec_2be48b4bca2a.slice - libcontainer container kubepods-besteffort-pod55e05950_1192_4962_a3ec_2be48b4bca2a.slice.
Jan 23 18:58:15.292526 kubelet[3150]: I0123 18:58:15.292497 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55e05950-1192-4962-a3ec-2be48b4bca2a-tigera-ca-bundle\") pod \"calico-kube-controllers-8684856889-72bpc\" (UID: \"55e05950-1192-4962-a3ec-2be48b4bca2a\") " pod="calico-system/calico-kube-controllers-8684856889-72bpc"
Jan 23 18:58:15.292526 kubelet[3150]: I0123 18:58:15.292532 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcm9b\" (UniqueName: \"kubernetes.io/projected/55e05950-1192-4962-a3ec-2be48b4bca2a-kube-api-access-bcm9b\") pod \"calico-kube-controllers-8684856889-72bpc\" (UID: \"55e05950-1192-4962-a3ec-2be48b4bca2a\") " pod="calico-system/calico-kube-controllers-8684856889-72bpc"
Jan 23 18:58:15.395547 systemd[1]: Created slice kubepods-burstable-podfebc0b92_9a4e_4d79_856f_cbca4e1c9361.slice - libcontainer container kubepods-burstable-podfebc0b92_9a4e_4d79_856f_cbca4e1c9361.slice.
Jan 23 18:58:15.493819 kubelet[3150]: I0123 18:58:15.493672 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfpl2\" (UniqueName: \"kubernetes.io/projected/febc0b92-9a4e-4d79-856f-cbca4e1c9361-kube-api-access-pfpl2\") pod \"coredns-66bc5c9577-7q979\" (UID: \"febc0b92-9a4e-4d79-856f-cbca4e1c9361\") " pod="kube-system/coredns-66bc5c9577-7q979"
Jan 23 18:58:15.493965 kubelet[3150]: I0123 18:58:15.493825 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/febc0b92-9a4e-4d79-856f-cbca4e1c9361-config-volume\") pod \"coredns-66bc5c9577-7q979\" (UID: \"febc0b92-9a4e-4d79-856f-cbca4e1c9361\") " pod="kube-system/coredns-66bc5c9577-7q979"
Jan 23 18:58:15.493965 kubelet[3150]: I0123 18:58:15.493929 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cd71195-3bad-4889-98d9-2a171485ef11-calico-apiserver-certs\") pod \"calico-apiserver-d6498b48f-btb4t\" (UID: \"3cd71195-3bad-4889-98d9-2a171485ef11\") " pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t"
Jan 23 18:58:15.493965 kubelet[3150]: I0123 18:58:15.493952 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lwfk\" (UniqueName: \"kubernetes.io/projected/3cd71195-3bad-4889-98d9-2a171485ef11-kube-api-access-2lwfk\") pod \"calico-apiserver-d6498b48f-btb4t\" (UID: \"3cd71195-3bad-4889-98d9-2a171485ef11\") " pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t"
Jan 23 18:58:15.497491 systemd[1]: Created slice kubepods-besteffort-pod3cd71195_3bad_4889_98d9_2a171485ef11.slice - libcontainer container kubepods-besteffort-pod3cd71195_3bad_4889_98d9_2a171485ef11.slice.
Jan 23 18:58:15.849731 containerd[1705]: time="2026-01-23T18:58:15.848369308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8684856889-72bpc,Uid:55e05950-1192-4962-a3ec-2be48b4bca2a,Namespace:calico-system,Attempt:0,}"
Jan 23 18:58:15.852395 systemd[1]: Created slice kubepods-besteffort-pod5141cf66_e083_4b00_b023_85272b62f472.slice - libcontainer container kubepods-besteffort-pod5141cf66_e083_4b00_b023_85272b62f472.slice.
Jan 23 18:58:15.895753 containerd[1705]: time="2026-01-23T18:58:15.895385541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7q979,Uid:febc0b92-9a4e-4d79-856f-cbca4e1c9361,Namespace:kube-system,Attempt:0,}"
Jan 23 18:58:15.897488 kubelet[3150]: I0123 18:58:15.897446 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7219b124-38e1-4c1e-9a91-813ecf13e9a5-config-volume\") pod \"coredns-66bc5c9577-cghv8\" (UID: \"7219b124-38e1-4c1e-9a91-813ecf13e9a5\") " pod="kube-system/coredns-66bc5c9577-cghv8"
Jan 23 18:58:15.899193 kubelet[3150]: I0123 18:58:15.897494 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5141cf66-e083-4b00-b023-85272b62f472-calico-apiserver-certs\") pod \"calico-apiserver-d6498b48f-qmxqv\" (UID: \"5141cf66-e083-4b00-b023-85272b62f472\") " pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv"
Jan 23 18:58:15.899193 kubelet[3150]: I0123 18:58:15.898752 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2gw5\" (UniqueName: \"kubernetes.io/projected/5141cf66-e083-4b00-b023-85272b62f472-kube-api-access-v2gw5\") pod \"calico-apiserver-d6498b48f-qmxqv\" (UID: \"5141cf66-e083-4b00-b023-85272b62f472\") " pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv"
Jan 23 18:58:15.899193 kubelet[3150]: I0123 18:58:15.898794 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-897qg\" (UniqueName: \"kubernetes.io/projected/7219b124-38e1-4c1e-9a91-813ecf13e9a5-kube-api-access-897qg\") pod \"coredns-66bc5c9577-cghv8\" (UID: \"7219b124-38e1-4c1e-9a91-813ecf13e9a5\") " pod="kube-system/coredns-66bc5c9577-cghv8"
Jan 23 18:58:15.908220 systemd[1]: Created slice kubepods-burstable-pod7219b124_38e1_4c1e_9a91_813ecf13e9a5.slice - libcontainer container kubepods-burstable-pod7219b124_38e1_4c1e_9a91_813ecf13e9a5.slice.
Jan 23 18:58:15.913114 containerd[1705]: time="2026-01-23T18:58:15.912996767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-btb4t,Uid:3cd71195-3bad-4889-98d9-2a171485ef11,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 18:58:15.922543 systemd[1]: Created slice kubepods-besteffort-podfea0c615_fd65_4cb0_8e2a_dc4b472b0f8f.slice - libcontainer container kubepods-besteffort-podfea0c615_fd65_4cb0_8e2a_dc4b472b0f8f.slice.
Jan 23 18:58:15.932872 systemd[1]: Created slice kubepods-besteffort-pod5f08c0b6_1683_46fd_9e2c_63a438d878d9.slice - libcontainer container kubepods-besteffort-pod5f08c0b6_1683_46fd_9e2c_63a438d878d9.slice.
Jan 23 18:58:15.942001 systemd[1]: Created slice kubepods-besteffort-pod41eb4aaa_d676_42e0_9ada_ef9e2ad0459b.slice - libcontainer container kubepods-besteffort-pod41eb4aaa_d676_42e0_9ada_ef9e2ad0459b.slice.
Jan 23 18:58:15.999852 kubelet[3150]: I0123 18:58:15.999820 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9chxf\" (UniqueName: \"kubernetes.io/projected/41eb4aaa-d676-42e0-9ada-ef9e2ad0459b-kube-api-access-9chxf\") pod \"goldmane-7c778bb748-6qgwf\" (UID: \"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b\") " pod="calico-system/goldmane-7c778bb748-6qgwf"
Jan 23 18:58:16.000160 kubelet[3150]: I0123 18:58:16.000144 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5f08c0b6-1683-46fd-9e2c-63a438d878d9-calico-apiserver-certs\") pod \"calico-apiserver-67858cdf98-r62lm\" (UID: \"5f08c0b6-1683-46fd-9e2c-63a438d878d9\") " pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm"
Jan 23 18:58:16.000305 kubelet[3150]: I0123 18:58:16.000292 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwcdw\" (UniqueName: \"kubernetes.io/projected/5f08c0b6-1683-46fd-9e2c-63a438d878d9-kube-api-access-rwcdw\") pod \"calico-apiserver-67858cdf98-r62lm\" (UID: \"5f08c0b6-1683-46fd-9e2c-63a438d878d9\") " pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm"
Jan 23 18:58:16.000476 kubelet[3150]: I0123 18:58:16.000449 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-ca-bundle\") pod \"whisker-868995fc76-xb2lm\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " pod="calico-system/whisker-868995fc76-xb2lm"
Jan 23 18:58:16.000713 kubelet[3150]: I0123 18:58:16.000701 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/41eb4aaa-d676-42e0-9ada-ef9e2ad0459b-config\") pod \"goldmane-7c778bb748-6qgwf\" (UID: \"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b\") " pod="calico-system/goldmane-7c778bb748-6qgwf"
Jan 23 18:58:16.001067 kubelet[3150]: I0123 18:58:16.000891 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-backend-key-pair\") pod \"whisker-868995fc76-xb2lm\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " pod="calico-system/whisker-868995fc76-xb2lm"
Jan 23 18:58:16.001067 kubelet[3150]: I0123 18:58:16.000933 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/41eb4aaa-d676-42e0-9ada-ef9e2ad0459b-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-6qgwf\" (UID: \"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b\") " pod="calico-system/goldmane-7c778bb748-6qgwf"
Jan 23 18:58:16.001067 kubelet[3150]: I0123 18:58:16.000959 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/41eb4aaa-d676-42e0-9ada-ef9e2ad0459b-goldmane-key-pair\") pod \"goldmane-7c778bb748-6qgwf\" (UID: \"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b\") " pod="calico-system/goldmane-7c778bb748-6qgwf"
Jan 23 18:58:16.001067 kubelet[3150]: I0123 18:58:16.000978 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cbgv\" (UniqueName: \"kubernetes.io/projected/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-kube-api-access-9cbgv\") pod \"whisker-868995fc76-xb2lm\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " pod="calico-system/whisker-868995fc76-xb2lm"
Jan 23 18:58:16.043730 systemd[1]: Created slice kubepods-besteffort-podff2219dc_84d2_48b9_9cbc_2450678772ac.slice - libcontainer container kubepods-besteffort-podff2219dc_84d2_48b9_9cbc_2450678772ac.slice.
Jan 23 18:58:16.053004 containerd[1705]: time="2026-01-23T18:58:16.052960227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lzq,Uid:ff2219dc-84d2-48b9-9cbc-2450678772ac,Namespace:calico-system,Attempt:0,}"
Jan 23 18:58:16.054986 containerd[1705]: time="2026-01-23T18:58:16.054523970Z" level=error msg="Failed to destroy network for sandbox \"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.056467 systemd[1]: run-netns-cni\x2d34bd0240\x2d9e2b\x2d689a\x2d84e6\x2d1e014865eff6.mount: Deactivated successfully.
Jan 23 18:58:16.061758 containerd[1705]: time="2026-01-23T18:58:16.061717500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8684856889-72bpc,Uid:55e05950-1192-4962-a3ec-2be48b4bca2a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.063740 kubelet[3150]: E0123 18:58:16.063695 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.064338 kubelet[3150]: E0123 18:58:16.063766 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8684856889-72bpc"
Jan 23 18:58:16.064338 kubelet[3150]: E0123 18:58:16.063787 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8684856889-72bpc"
Jan 23 18:58:16.064338 kubelet[3150]: E0123 18:58:16.063839 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d17dedf1823e0e7e609d88833df76acdd81bef4b8159fdbd5bbce9a87af0707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 18:58:16.082831 containerd[1705]: time="2026-01-23T18:58:16.082759981Z" level=error msg="Failed to destroy network for sandbox \"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.087551 containerd[1705]: time="2026-01-23T18:58:16.087488044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-btb4t,Uid:3cd71195-3bad-4889-98d9-2a171485ef11,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.088255 kubelet[3150]: E0123 18:58:16.087739 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.088255 kubelet[3150]: E0123 18:58:16.087780 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t"
Jan 23 18:58:16.088255 kubelet[3150]: E0123 18:58:16.087794 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t"
Jan 23 18:58:16.088362 kubelet[3150]: E0123 18:58:16.087842 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f65b842d642b74a2ff097d72685c2e1315335f8b0615aaf9737c3bda382ad153\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 18:58:16.088550 containerd[1705]: time="2026-01-23T18:58:16.088499921Z" level=error msg="Failed to destroy network for sandbox \"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.092137 containerd[1705]: time="2026-01-23T18:58:16.092080135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7q979,Uid:febc0b92-9a4e-4d79-856f-cbca4e1c9361,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.093539 kubelet[3150]: E0123 18:58:16.093510 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 18:58:16.093642 kubelet[3150]: E0123 18:58:16.093551 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7q979"
Jan 23 18:58:16.093642 kubelet[3150]: E0123 18:58:16.093568 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-7q979"
Jan 23 18:58:16.093642 kubelet[3150]: E0123 18:58:16.093623 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-7q979_kube-system(febc0b92-9a4e-4d79-856f-cbca4e1c9361)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-7q979_kube-system(febc0b92-9a4e-4d79-856f-cbca4e1c9361)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fb3e845fd13e93e2bf3775a819c912deb07db94469d128520d3cdf24622777e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-7q979" podUID="febc0b92-9a4e-4d79-856f-cbca4e1c9361"
network for sandbox \"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.136197 containerd[1705]: time="2026-01-23T18:58:16.136147838Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lzq,Uid:ff2219dc-84d2-48b9-9cbc-2450678772ac,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.136813 kubelet[3150]: E0123 18:58:16.136788 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.137175 kubelet[3150]: E0123 18:58:16.136839 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:16.137175 kubelet[3150]: E0123 18:58:16.136858 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w8lzq" Jan 23 18:58:16.137175 kubelet[3150]: E0123 18:58:16.136903 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09b8fc7dd71c55fee2a3752f807139461698d2589bddb314fae413c5528bc7e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:58:16.138333 containerd[1705]: time="2026-01-23T18:58:16.138304721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 18:58:16.169567 containerd[1705]: time="2026-01-23T18:58:16.169541306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-qmxqv,Uid:5141cf66-e083-4b00-b023-85272b62f472,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:16.213123 containerd[1705]: time="2026-01-23T18:58:16.213090236Z" 
level=error msg="Failed to destroy network for sandbox \"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.216975 containerd[1705]: time="2026-01-23T18:58:16.216944957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-qmxqv,Uid:5141cf66-e083-4b00-b023-85272b62f472,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.219461 kubelet[3150]: E0123 18:58:16.217172 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.219461 kubelet[3150]: E0123 18:58:16.217204 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" Jan 23 18:58:16.219461 kubelet[3150]: E0123 18:58:16.217222 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" Jan 23 18:58:16.219554 kubelet[3150]: E0123 18:58:16.217257 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eebde60e0eb5bbd7a8a03498618fe22ce91c1d1f9a48a7d24bdca07d1f8e8ee4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:58:16.221754 containerd[1705]: time="2026-01-23T18:58:16.221730968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cghv8,Uid:7219b124-38e1-4c1e-9a91-813ecf13e9a5,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:16.236920 containerd[1705]: time="2026-01-23T18:58:16.236839178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-868995fc76-xb2lm,Uid:fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:16.241652 containerd[1705]: time="2026-01-23T18:58:16.241623130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67858cdf98-r62lm,Uid:5f08c0b6-1683-46fd-9e2c-63a438d878d9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:16.251169 containerd[1705]: time="2026-01-23T18:58:16.251072859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgwf,Uid:41eb4aaa-d676-42e0-9ada-ef9e2ad0459b,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:16.268865 containerd[1705]: time="2026-01-23T18:58:16.268834963Z" level=error msg="Failed to destroy network for sandbox \"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.308840 containerd[1705]: time="2026-01-23T18:58:16.308744985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cghv8,Uid:7219b124-38e1-4c1e-9a91-813ecf13e9a5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.309117 kubelet[3150]: E0123 18:58:16.309086 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.309348 kubelet[3150]: E0123 18:58:16.309300 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cghv8" Jan 23 18:58:16.309586 kubelet[3150]: E0123 18:58:16.309328 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-cghv8" Jan 23 18:58:16.309586 kubelet[3150]: E0123 18:58:16.309541 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-cghv8_kube-system(7219b124-38e1-4c1e-9a91-813ecf13e9a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-cghv8_kube-system(7219b124-38e1-4c1e-9a91-813ecf13e9a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3eef63832cfd51fc0b88873037034b62b6582b6221aa2703bf5cd4e6249c0034\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-cghv8" podUID="7219b124-38e1-4c1e-9a91-813ecf13e9a5" Jan 23 18:58:16.372089 containerd[1705]: time="2026-01-23T18:58:16.372006039Z" level=error msg="Failed to destroy network for sandbox \"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.376301 containerd[1705]: time="2026-01-23T18:58:16.376233472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgwf,Uid:41eb4aaa-d676-42e0-9ada-ef9e2ad0459b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.376638 kubelet[3150]: E0123 18:58:16.376585 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.376719 kubelet[3150]: E0123 18:58:16.376641 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgwf" Jan 23 18:58:16.376719 kubelet[3150]: E0123 18:58:16.376661 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-6qgwf" Jan 23 18:58:16.376800 kubelet[3150]: E0123 18:58:16.376729 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36a99c862b82ae15bc8b04b3113c098b71731fe39a7f1dcec6178f1b7ed5d0b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 18:58:16.380426 containerd[1705]: time="2026-01-23T18:58:16.380384857Z" 
level=error msg="Failed to destroy network for sandbox \"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.384471 containerd[1705]: time="2026-01-23T18:58:16.383869547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-868995fc76-xb2lm,Uid:fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.384569 kubelet[3150]: E0123 18:58:16.384545 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.384610 kubelet[3150]: E0123 18:58:16.384585 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-868995fc76-xb2lm" Jan 23 18:58:16.384635 kubelet[3150]: E0123 18:58:16.384606 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-868995fc76-xb2lm" Jan 23 18:58:16.384716 kubelet[3150]: E0123 18:58:16.384654 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-868995fc76-xb2lm_calico-system(fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-868995fc76-xb2lm_calico-system(fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f1e8a6cfa90702d8a90fd6d466fa64b618a399c11e460db69fa3bb1b06f8b48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-868995fc76-xb2lm" podUID="fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f" Jan 23 18:58:16.387819 containerd[1705]: time="2026-01-23T18:58:16.387785054Z" level=error msg="Failed to destroy network for sandbox \"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.390653 containerd[1705]: 
time="2026-01-23T18:58:16.390619721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67858cdf98-r62lm,Uid:5f08c0b6-1683-46fd-9e2c-63a438d878d9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.390999 kubelet[3150]: E0123 18:58:16.390972 3150 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 18:58:16.391076 kubelet[3150]: E0123 18:58:16.391013 3150 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" Jan 23 18:58:16.391076 kubelet[3150]: E0123 18:58:16.391031 3150 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" Jan 23 18:58:16.391153 kubelet[3150]: E0123 18:58:16.391083 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce7fa13b1dd9792bbc62e55272cac4808b4bbc36d560b7cc3daa0bbcc8c4254c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 18:58:17.017726 systemd[1]: run-netns-cni\x2dff3fb345\x2d72cd\x2d4d49\x2d66ee\x2d0739f98e2f32.mount: Deactivated successfully. Jan 23 18:58:17.017991 systemd[1]: run-netns-cni\x2d59525990\x2d1e39\x2d0912\x2df4ed\x2db84ce372318e.mount: Deactivated successfully. Jan 23 18:58:17.018100 systemd[1]: run-netns-cni\x2dac5e3f29\x2d35c7\x2d0b57\x2df6e1\x2d8f100520e3de.mount: Deactivated successfully. Jan 23 18:58:22.910633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3601594723.mount: Deactivated successfully. 
Jan 23 18:58:22.946070 containerd[1705]: time="2026-01-23T18:58:22.946028846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:22.949467 containerd[1705]: time="2026-01-23T18:58:22.949381319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 23 18:58:22.952931 containerd[1705]: time="2026-01-23T18:58:22.952902156Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:22.957093 containerd[1705]: time="2026-01-23T18:58:22.957048918Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 18:58:22.957670 containerd[1705]: time="2026-01-23T18:58:22.957409298Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.819070771s" Jan 23 18:58:22.957670 containerd[1705]: time="2026-01-23T18:58:22.957440002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 23 18:58:22.973912 containerd[1705]: time="2026-01-23T18:58:22.973886210Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 18:58:22.997492 containerd[1705]: time="2026-01-23T18:58:22.996106236Z" level=info msg="Container 20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:23.017691 containerd[1705]: time="2026-01-23T18:58:23.015901218Z" level=info msg="CreateContainer within sandbox \"a68d480287c5088609985e5c526b02772bb75a3cbbb4124bdf02de131d84dc16\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2\"" Jan 23 18:58:23.019064 containerd[1705]: time="2026-01-23T18:58:23.019040893Z" level=info msg="StartContainer for \"20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2\"" Jan 23 18:58:23.024290 containerd[1705]: time="2026-01-23T18:58:23.024253030Z" level=info msg="connecting to shim 20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2" address="unix:///run/containerd/s/584b3c82caa423b15b981c99611e01f8ea095af82c7aba5dcd4e28ee5372a253" protocol=ttrpc version=3 Jan 23 18:58:23.042854 systemd[1]: Started cri-containerd-20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2.scope - libcontainer container 20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2. Jan 23 18:58:23.103356 containerd[1705]: time="2026-01-23T18:58:23.103323686Z" level=info msg="StartContainer for \"20717073d8c02579c67f06f08c468b67af2ccc78b5a0ec51334440581fa9eae2\" returns successfully" Jan 23 18:58:23.344912 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 18:58:23.345031 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. 
All Rights Reserved. Jan 23 18:58:23.462084 kubelet[3150]: I0123 18:58:23.461382 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8x4kv" podStartSLOduration=1.239185661 podStartE2EDuration="19.461362639s" podCreationTimestamp="2026-01-23 18:58:04 +0000 UTC" firstStartedPulling="2026-01-23 18:58:04.735909733 +0000 UTC m=+19.808410368" lastFinishedPulling="2026-01-23 18:58:22.958086727 +0000 UTC m=+38.030587346" observedRunningTime="2026-01-23 18:58:23.180775307 +0000 UTC m=+38.253275958" watchObservedRunningTime="2026-01-23 18:58:23.461362639 +0000 UTC m=+38.533863275" Jan 23 18:58:23.545715 kubelet[3150]: I0123 18:58:23.544918 3150 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-backend-key-pair\") pod \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " Jan 23 18:58:23.545715 kubelet[3150]: I0123 18:58:23.544958 3150 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cbgv\" (UniqueName: \"kubernetes.io/projected/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-kube-api-access-9cbgv\") pod \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " Jan 23 18:58:23.545715 kubelet[3150]: I0123 18:58:23.544987 3150 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-ca-bundle\") pod \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\" (UID: \"fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f\") " Jan 23 18:58:23.545715 kubelet[3150]: I0123 18:58:23.545309 3150 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f" (UID: "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 18:58:23.551288 kubelet[3150]: I0123 18:58:23.551250 3150 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-kube-api-access-9cbgv" (OuterVolumeSpecName: "kube-api-access-9cbgv") pod "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f" (UID: "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f"). InnerVolumeSpecName "kube-api-access-9cbgv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 18:58:23.553137 kubelet[3150]: I0123 18:58:23.553103 3150 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f" (UID: "fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 18:58:23.646023 kubelet[3150]: I0123 18:58:23.645602 3150 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-ca-bundle\") on node \"ci-4459.2.3-a-2aee31126f\" DevicePath \"\"" Jan 23 18:58:23.646226 kubelet[3150]: I0123 18:58:23.646161 3150 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-whisker-backend-key-pair\") on node \"ci-4459.2.3-a-2aee31126f\" DevicePath \"\"" Jan 23 18:58:23.646226 kubelet[3150]: I0123 18:58:23.646186 3150 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cbgv\" (UniqueName: \"kubernetes.io/projected/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f-kube-api-access-9cbgv\") on node \"ci-4459.2.3-a-2aee31126f\" DevicePath \"\"" Jan 23 18:58:23.910536 systemd[1]: var-lib-kubelet-pods-fea0c615\x2dfd65\x2d4cb0\x2d8e2a\x2ddc4b472b0f8f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cbgv.mount: Deactivated successfully. Jan 23 18:58:23.910823 systemd[1]: var-lib-kubelet-pods-fea0c615\x2dfd65\x2d4cb0\x2d8e2a\x2ddc4b472b0f8f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 18:58:24.163962 systemd[1]: Removed slice kubepods-besteffort-podfea0c615_fd65_4cb0_8e2a_dc4b472b0f8f.slice - libcontainer container kubepods-besteffort-podfea0c615_fd65_4cb0_8e2a_dc4b472b0f8f.slice. Jan 23 18:58:24.257489 systemd[1]: Created slice kubepods-besteffort-pod485a4546_1705_4b94_89b0_63e51dc5fdcb.slice - libcontainer container kubepods-besteffort-pod485a4546_1705_4b94_89b0_63e51dc5fdcb.slice. Jan 23 18:58:24.350019 kubelet[3150]: I0123 18:58:24.349975 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/485a4546-1705-4b94-89b0-63e51dc5fdcb-whisker-ca-bundle\") pod \"whisker-9ffdfbd54-jwxd4\" (UID: \"485a4546-1705-4b94-89b0-63e51dc5fdcb\") " pod="calico-system/whisker-9ffdfbd54-jwxd4" Jan 23 18:58:24.350019 kubelet[3150]: I0123 18:58:24.350017 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/485a4546-1705-4b94-89b0-63e51dc5fdcb-whisker-backend-key-pair\") pod \"whisker-9ffdfbd54-jwxd4\" (UID: \"485a4546-1705-4b94-89b0-63e51dc5fdcb\") " pod="calico-system/whisker-9ffdfbd54-jwxd4" Jan 23 18:58:24.350180 kubelet[3150]: I0123 18:58:24.350033 3150 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrpqj\" (UniqueName: \"kubernetes.io/projected/485a4546-1705-4b94-89b0-63e51dc5fdcb-kube-api-access-zrpqj\") pod \"whisker-9ffdfbd54-jwxd4\" (UID: \"485a4546-1705-4b94-89b0-63e51dc5fdcb\") " pod="calico-system/whisker-9ffdfbd54-jwxd4" Jan 23 18:58:24.568115 containerd[1705]: time="2026-01-23T18:58:24.567805916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9ffdfbd54-jwxd4,Uid:485a4546-1705-4b94-89b0-63e51dc5fdcb,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:24.689566 systemd-networkd[1345]: calia0cc2200c6b: Link UP Jan 23 18:58:24.689882 systemd-networkd[1345]: calia0cc2200c6b: Gained carrier Jan 23 18:58:24.703710 containerd[1705]: 2026-01-23 18:58:24.594 [INFO][4310] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 
18:58:24.703710 containerd[1705]: 2026-01-23 18:58:24.602 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0 whisker-9ffdfbd54- calico-system 485a4546-1705-4b94-89b0-63e51dc5fdcb 897 0 2026-01-23 18:58:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:9ffdfbd54 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f whisker-9ffdfbd54-jwxd4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia0cc2200c6b [] [] }} ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-" Jan 23 18:58:24.703710 containerd[1705]: 2026-01-23 18:58:24.602 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.703710 containerd[1705]: 2026-01-23 18:58:24.623 [INFO][4324] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" HandleID="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Workload="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.623 [INFO][4324] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" HandleID="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Workload="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f160), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"whisker-9ffdfbd54-jwxd4", "timestamp":"2026-01-23 18:58:24.623008063 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.623 [INFO][4324] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.623 [INFO][4324] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.623 [INFO][4324] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.628 [INFO][4324] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.631 [INFO][4324] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.634 [INFO][4324] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.635 [INFO][4324] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.703941 containerd[1705]: 2026-01-23 18:58:24.637 [INFO][4324] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.637 [INFO][4324] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.638 [INFO][4324] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404 Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.645 [INFO][4324] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.650 [INFO][4324] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.129/26] block=192.168.85.128/26 handle="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.651 [INFO][4324] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.129/26] handle="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.651 [INFO][4324] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
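The [4324] trace above is Calico's block-affinity IPAM in one pass: the host already owns an affinity for the /26 block 192.168.85.128/26, so the allocator takes the host-wide lock, loads the block, claims the lowest free ordinal, writes the block back to the datastore to persist the claim, and releases the lock; the result (192.168.85.129) surfaces on the next line. A simplified sketch of the claim step, assuming a plain in-memory bitmap where the real allocator uses a datastore-backed allocation array with handles and attributes; ordinal 0 (.128) is marked used here because the first pod in this log receives .129, which suggests the block's first address was already claimed (commonly by the node itself):

    package main

    import (
        "fmt"
        "net"
    )

    // block models a /26 allocation block as the trace describes it:
    // a CIDR plus a used flag per address ordinal.
    type block struct {
        cidr net.IPNet
        used [64]bool // a /26 spans 64 addresses
    }

    // assign claims the lowest free ordinal, mirroring
    // "Attempting to assign 1 addresses from block".
    func (b *block) assign() (net.IP, error) {
        base := b.cidr.IP.To4()
        for i := range b.used {
            if b.used[i] {
                continue
            }
            b.used[i] = true
            return net.IPv4(base[0], base[1], base[2], base[3]+byte(i)), nil
        }
        return nil, fmt.Errorf("block %s is full", b.cidr.String())
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.85.128/26")
        b := &block{cidr: *cidr}
        b.used[0] = true // .128 assumed already claimed, per the note above
        ip, _ := b.assign()
        fmt.Println(ip) // prints 192.168.85.129
    }

Block affinity is why the lookup succeeds so quickly: each node prefers addresses from blocks it already owns, so most assignments touch one block instead of scanning the whole pool.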
Jan 23 18:58:24.704280 containerd[1705]: 2026-01-23 18:58:24.651 [INFO][4324] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.129/26] IPv6=[] ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" HandleID="k8s-pod-network.e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Workload="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.704454 containerd[1705]: 2026-01-23 18:58:24.654 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0", GenerateName:"whisker-9ffdfbd54-", Namespace:"calico-system", SelfLink:"", UID:"485a4546-1705-4b94-89b0-63e51dc5fdcb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9ffdfbd54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"whisker-9ffdfbd54-jwxd4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0cc2200c6b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:24.704454 containerd[1705]: 2026-01-23 18:58:24.654 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.129/32] ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.704584 containerd[1705]: 2026-01-23 18:58:24.654 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0cc2200c6b ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.704584 containerd[1705]: 2026-01-23 18:58:24.689 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.704639 containerd[1705]: 2026-01-23 18:58:24.690 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" 
Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0", GenerateName:"whisker-9ffdfbd54-", Namespace:"calico-system", SelfLink:"", UID:"485a4546-1705-4b94-89b0-63e51dc5fdcb", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"9ffdfbd54", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404", Pod:"whisker-9ffdfbd54-jwxd4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.85.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia0cc2200c6b", MAC:"1e:8a:6b:02:ae:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:24.704925 containerd[1705]: 2026-01-23 18:58:24.700 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" Namespace="calico-system" Pod="whisker-9ffdfbd54-jwxd4" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-whisker--9ffdfbd54--jwxd4-eth0" Jan 23 18:58:24.811194 containerd[1705]: time="2026-01-23T18:58:24.811138902Z" level=info msg="connecting to shim e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404" address="unix:///run/containerd/s/bd4f7077ca5baeb506fd62925c5f2910cc99de3e957e8f0cfb0704532aafd060" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:24.854150 systemd[1]: Started cri-containerd-e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404.scope - libcontainer container e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404. 
Jan 23 18:58:24.986515 containerd[1705]: time="2026-01-23T18:58:24.986460167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-9ffdfbd54-jwxd4,Uid:485a4546-1705-4b94-89b0-63e51dc5fdcb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7fc6e9ec931a55aa21b0540a2734274887bfd8803692539c943fb841bdda404\"" Jan 23 18:58:24.989402 containerd[1705]: time="2026-01-23T18:58:24.988661382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:58:25.027570 kubelet[3150]: I0123 18:58:25.027467 3150 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f" path="/var/lib/kubelet/pods/fea0c615-fd65-4cb0-8e2a-dc4b472b0f8f/volumes" Jan 23 18:58:25.250109 containerd[1705]: time="2026-01-23T18:58:25.250074227Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:25.255321 containerd[1705]: time="2026-01-23T18:58:25.255295095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:58:25.255379 containerd[1705]: time="2026-01-23T18:58:25.255365683Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:58:25.255553 kubelet[3150]: E0123 18:58:25.255523 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:25.255620 kubelet[3150]: E0123 18:58:25.255566 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:58:25.255696 kubelet[3150]: E0123 18:58:25.255654 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:25.257007 containerd[1705]: time="2026-01-23T18:58:25.256969621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:58:25.516620 containerd[1705]: time="2026-01-23T18:58:25.516486219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:25.519465 containerd[1705]: time="2026-01-23T18:58:25.519425909Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:58:25.519557 containerd[1705]: 
time="2026-01-23T18:58:25.519444554Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:25.519774 kubelet[3150]: E0123 18:58:25.519699 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:25.519774 kubelet[3150]: E0123 18:58:25.519743 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:58:25.519956 kubelet[3150]: E0123 18:58:25.519828 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:25.519956 kubelet[3150]: E0123 18:58:25.519875 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 18:58:25.976914 systemd-networkd[1345]: calia0cc2200c6b: Gained IPv6LL Jan 23 18:58:26.167707 kubelet[3150]: E0123 18:58:26.167120 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 
18:58:27.029463 containerd[1705]: time="2026-01-23T18:58:27.029414249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-qmxqv,Uid:5141cf66-e083-4b00-b023-85272b62f472,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:27.040705 containerd[1705]: time="2026-01-23T18:58:27.040486275Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lzq,Uid:ff2219dc-84d2-48b9-9cbc-2450678772ac,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:27.245110 systemd-networkd[1345]: cali0ba632161e8: Link UP Jan 23 18:58:27.247780 systemd-networkd[1345]: cali0ba632161e8: Gained carrier Jan 23 18:58:27.271516 containerd[1705]: 2026-01-23 18:58:27.111 [INFO][4519] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:27.271516 containerd[1705]: 2026-01-23 18:58:27.124 [INFO][4519] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0 csi-node-driver- calico-system ff2219dc-84d2-48b9-9cbc-2450678772ac 712 0 2026-01-23 18:58:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f csi-node-driver-w8lzq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0ba632161e8 [] [] }} ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-" Jan 23 18:58:27.271516 containerd[1705]: 2026-01-23 18:58:27.124 [INFO][4519] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.271516 containerd[1705]: 2026-01-23 18:58:27.183 [INFO][4545] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" HandleID="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Workload="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.184 [INFO][4545] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" HandleID="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Workload="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf910), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"csi-node-driver-w8lzq", "timestamp":"2026-01-23 18:58:27.183187132 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.184 [INFO][4545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.184 [INFO][4545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.184 [INFO][4545] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.191 [INFO][4545] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.195 [INFO][4545] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.205 [INFO][4545] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.211 [INFO][4545] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272015 containerd[1705]: 2026-01-23 18:58:27.213 [INFO][4545] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.213 [INFO][4545] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.214 [INFO][4545] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1 Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.220 [INFO][4545] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4545] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.130/26] block=192.168.85.128/26 handle="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4545] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.130/26] handle="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:27.272319 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4545] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.130/26] IPv6=[] ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" HandleID="k8s-pod-network.ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Workload="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.272471 containerd[1705]: 2026-01-23 18:58:27.233 [INFO][4519] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff2219dc-84d2-48b9-9cbc-2450678772ac", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"csi-node-driver-w8lzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ba632161e8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:27.272536 containerd[1705]: 2026-01-23 18:58:27.233 [INFO][4519] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.130/32] ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.272536 containerd[1705]: 2026-01-23 18:58:27.233 [INFO][4519] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0ba632161e8 ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.272536 containerd[1705]: 2026-01-23 18:58:27.248 [INFO][4519] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.272601 containerd[1705]: 2026-01-23 18:58:27.250 [INFO][4519] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ff2219dc-84d2-48b9-9cbc-2450678772ac", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1", Pod:"csi-node-driver-w8lzq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.85.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0ba632161e8", MAC:"1a:2d:11:eb:4a:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:27.272658 containerd[1705]: 2026-01-23 18:58:27.269 [INFO][4519] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" Namespace="calico-system" Pod="csi-node-driver-w8lzq" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-csi--node--driver--w8lzq-eth0" Jan 23 18:58:27.322571 containerd[1705]: time="2026-01-23T18:58:27.322489495Z" level=info msg="connecting to shim ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1" address="unix:///run/containerd/s/1a3c76f0aad928249f9410b13a3e2738d7bb3158a57d8fb0032b05d8652df73e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:27.338498 systemd-networkd[1345]: calibc3a741b9c2: Link UP Jan 23 18:58:27.348545 systemd-networkd[1345]: calibc3a741b9c2: Gained carrier Jan 23 18:58:27.350838 systemd[1]: Started cri-containerd-ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1.scope - libcontainer container ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1. 
Jan 23 18:58:27.365262 containerd[1705]: 2026-01-23 18:58:27.094 [INFO][4506] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:27.365262 containerd[1705]: 2026-01-23 18:58:27.119 [INFO][4506] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0 calico-apiserver-d6498b48f- calico-apiserver 5141cf66-e083-4b00-b023-85272b62f472 825 0 2026-01-23 18:58:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6498b48f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f calico-apiserver-d6498b48f-qmxqv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibc3a741b9c2 [] [] }} ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-" Jan 23 18:58:27.365262 containerd[1705]: 2026-01-23 18:58:27.119 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.365262 containerd[1705]: 2026-01-23 18:58:27.187 [INFO][4542] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" HandleID="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.187 [INFO][4542] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" HandleID="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5030), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.3-a-2aee31126f", "pod":"calico-apiserver-d6498b48f-qmxqv", "timestamp":"2026-01-23 18:58:27.187002312 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.187 [INFO][4542] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4542] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.229 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.292 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.297 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.300 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.302 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.365483 containerd[1705]: 2026-01-23 18:58:27.304 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.304 [INFO][4542] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.305 [INFO][4542] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82 Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.309 [INFO][4542] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.326 [INFO][4542] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.131/26] block=192.168.85.128/26 handle="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.326 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.131/26] handle="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.326 [INFO][4542] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 18:58:27.366484 containerd[1705]: 2026-01-23 18:58:27.326 [INFO][4542] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.131/26] IPv6=[] ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" HandleID="k8s-pod-network.27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.366796 containerd[1705]: 2026-01-23 18:58:27.334 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0", GenerateName:"calico-apiserver-d6498b48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5141cf66-e083-4b00-b023-85272b62f472", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6498b48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"calico-apiserver-d6498b48f-qmxqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc3a741b9c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:27.366872 containerd[1705]: 2026-01-23 18:58:27.334 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.131/32] ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.366872 containerd[1705]: 2026-01-23 18:58:27.334 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibc3a741b9c2 ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.366872 containerd[1705]: 2026-01-23 18:58:27.348 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.366945 containerd[1705]: 2026-01-23 18:58:27.349 [INFO][4506] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0", GenerateName:"calico-apiserver-d6498b48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"5141cf66-e083-4b00-b023-85272b62f472", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6498b48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82", Pod:"calico-apiserver-d6498b48f-qmxqv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibc3a741b9c2", MAC:"e2:aa:a0:69:4a:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:27.367011 containerd[1705]: 2026-01-23 18:58:27.363 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-qmxqv" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--qmxqv-eth0" Jan 23 18:58:27.391536 containerd[1705]: time="2026-01-23T18:58:27.391507059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lzq,Uid:ff2219dc-84d2-48b9-9cbc-2450678772ac,Namespace:calico-system,Attempt:0,} returns sandbox id \"ed083778f68b30cfb5bd12ea8ed51427b69fd264ddbcd9410dcbc86a2f629de1\"" Jan 23 18:58:27.393147 containerd[1705]: time="2026-01-23T18:58:27.392987056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:58:27.435086 containerd[1705]: time="2026-01-23T18:58:27.435052735Z" level=info msg="connecting to shim 27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82" address="unix:///run/containerd/s/0a4e9e3f5efa8a2accc9202b673f27e4acbd5270d526f5147d50a8f387912806" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:27.459810 systemd[1]: Started cri-containerd-27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82.scope - libcontainer container 27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82. 
Jan 23 18:58:27.500612 containerd[1705]: time="2026-01-23T18:58:27.500534567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-qmxqv,Uid:5141cf66-e083-4b00-b023-85272b62f472,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"27d05cc33ab1e02753598adcbf32e04128dbd90097adb8013803eefa95eafc82\"" Jan 23 18:58:27.670079 containerd[1705]: time="2026-01-23T18:58:27.669426929Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:27.673612 containerd[1705]: time="2026-01-23T18:58:27.673546313Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:58:27.673612 containerd[1705]: time="2026-01-23T18:58:27.673588591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:58:27.674040 kubelet[3150]: E0123 18:58:27.673816 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:27.674040 kubelet[3150]: E0123 18:58:27.673856 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:58:27.674040 kubelet[3150]: E0123 18:58:27.674006 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:27.675255 containerd[1705]: time="2026-01-23T18:58:27.675042751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:27.993015 containerd[1705]: time="2026-01-23T18:58:27.992959343Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:27.995775 containerd[1705]: time="2026-01-23T18:58:27.995742544Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:27.995854 containerd[1705]: time="2026-01-23T18:58:27.995759686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:27.996019 kubelet[3150]: E0123 18:58:27.995965 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: 
not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:27.996075 kubelet[3150]: E0123 18:58:27.996031 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:27.996266 kubelet[3150]: E0123 18:58:27.996223 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:27.996305 kubelet[3150]: E0123 18:58:27.996260 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:58:27.996534 containerd[1705]: time="2026-01-23T18:58:27.996517658Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:58:28.026340 containerd[1705]: time="2026-01-23T18:58:28.026304095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cghv8,Uid:7219b124-38e1-4c1e-9a91-813ecf13e9a5,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:28.149583 systemd-networkd[1345]: cali616ffe50c98: Link UP Jan 23 18:58:28.150238 systemd-networkd[1345]: cali616ffe50c98: Gained carrier Jan 23 18:58:28.164434 containerd[1705]: 2026-01-23 18:58:28.064 [INFO][4669] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:28.164434 containerd[1705]: 2026-01-23 18:58:28.073 [INFO][4669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0 coredns-66bc5c9577- kube-system 7219b124-38e1-4c1e-9a91-813ecf13e9a5 826 0 2026-01-23 18:57:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f coredns-66bc5c9577-cghv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali616ffe50c98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-" Jan 23 18:58:28.164434 containerd[1705]: 2026-01-23 18:58:28.073 [INFO][4669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" 
WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.164434 containerd[1705]: 2026-01-23 18:58:28.103 [INFO][4686] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" HandleID="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.104 [INFO][4686] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" HandleID="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"coredns-66bc5c9577-cghv8", "timestamp":"2026-01-23 18:58:28.103981084 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.104 [INFO][4686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.104 [INFO][4686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.104 [INFO][4686] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.110 [INFO][4686] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.115 [INFO][4686] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.121 [INFO][4686] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.123 [INFO][4686] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165525 containerd[1705]: 2026-01-23 18:58:28.125 [INFO][4686] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.125 [INFO][4686] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.128 [INFO][4686] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.134 [INFO][4686] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165673 containerd[1705]: 
2026-01-23 18:58:28.143 [INFO][4686] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.132/26] block=192.168.85.128/26 handle="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.144 [INFO][4686] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.132/26] handle="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.144 [INFO][4686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:58:28.165673 containerd[1705]: 2026-01-23 18:58:28.144 [INFO][4686] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.132/26] IPv6=[] ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" HandleID="k8s-pod-network.a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.165848 containerd[1705]: 2026-01-23 18:58:28.146 [INFO][4669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7219b124-38e1-4c1e-9a91-813ecf13e9a5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"coredns-66bc5c9577-cghv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali616ffe50c98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:28.165848 containerd[1705]: 
2026-01-23 18:58:28.147 [INFO][4669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.132/32] ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.165848 containerd[1705]: 2026-01-23 18:58:28.147 [INFO][4669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali616ffe50c98 ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.165848 containerd[1705]: 2026-01-23 18:58:28.149 [INFO][4669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.165848 containerd[1705]: 2026-01-23 18:58:28.151 [INFO][4669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"7219b124-38e1-4c1e-9a91-813ecf13e9a5", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a", Pod:"coredns-66bc5c9577-cghv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali616ffe50c98", MAC:"ca:8f:6a:29:d0:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:28.166055 containerd[1705]: 2026-01-23 18:58:28.162 [INFO][4669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" Namespace="kube-system" Pod="coredns-66bc5c9577-cghv8" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--cghv8-eth0" Jan 23 18:58:28.172033 kubelet[3150]: E0123 18:58:28.171153 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:58:28.218579 containerd[1705]: time="2026-01-23T18:58:28.218038910Z" level=info msg="connecting to shim a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a" address="unix:///run/containerd/s/d2cf5732446a3a69f3a03e16e772f0124f8eea6160bf5ac0d4eb14c0f4ba528d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:28.241824 systemd[1]: Started cri-containerd-a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a.scope - libcontainer container a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a. Jan 23 18:58:28.268041 containerd[1705]: time="2026-01-23T18:58:28.268007457Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:28.271830 containerd[1705]: time="2026-01-23T18:58:28.271791392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:58:28.271917 containerd[1705]: time="2026-01-23T18:58:28.271875615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:58:28.272181 kubelet[3150]: E0123 18:58:28.272148 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:28.272496 kubelet[3150]: E0123 18:58:28.272282 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:58:28.272851 kubelet[3150]: E0123 18:58:28.272779 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:28.273191 kubelet[3150]: E0123 18:58:28.272837 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:58:28.287656 containerd[1705]: time="2026-01-23T18:58:28.287626433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-cghv8,Uid:7219b124-38e1-4c1e-9a91-813ecf13e9a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a\"" Jan 23 18:58:28.296616 containerd[1705]: time="2026-01-23T18:58:28.296584469Z" level=info msg="CreateContainer within sandbox \"a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:28.330703 containerd[1705]: time="2026-01-23T18:58:28.327223471Z" level=info msg="Container 778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:28.338111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776459076.mount: Deactivated successfully. Jan 23 18:58:28.347872 containerd[1705]: time="2026-01-23T18:58:28.347841747Z" level=info msg="CreateContainer within sandbox \"a414ce7312ff37d72b6e81fe5155a0468eef25df9615609c5ed9ac06930e004a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131\"" Jan 23 18:58:28.348385 containerd[1705]: time="2026-01-23T18:58:28.348363172Z" level=info msg="StartContainer for \"778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131\"" Jan 23 18:58:28.349273 containerd[1705]: time="2026-01-23T18:58:28.349235959Z" level=info msg="connecting to shim 778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131" address="unix:///run/containerd/s/d2cf5732446a3a69f3a03e16e772f0124f8eea6160bf5ac0d4eb14c0f4ba528d" protocol=ttrpc version=3 Jan 23 18:58:28.375085 systemd[1]: Started cri-containerd-778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131.scope - libcontainer container 778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131. 
Jan 23 18:58:28.405705 containerd[1705]: time="2026-01-23T18:58:28.405667456Z" level=info msg="StartContainer for \"778e6d9fcc92e650d34f0721cb0fadbca51160269dc44b068b6cdbef01390131\" returns successfully" Jan 23 18:58:28.920824 systemd-networkd[1345]: cali0ba632161e8: Gained IPv6LL Jan 23 18:58:29.027334 containerd[1705]: time="2026-01-23T18:58:29.027299071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8684856889-72bpc,Uid:55e05950-1192-4962-a3ec-2be48b4bca2a,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:29.032878 containerd[1705]: time="2026-01-23T18:58:29.032842786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-btb4t,Uid:3cd71195-3bad-4889-98d9-2a171485ef11,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:29.163250 systemd-networkd[1345]: cali76746ff362b: Link UP Jan 23 18:58:29.164356 systemd-networkd[1345]: cali76746ff362b: Gained carrier Jan 23 18:58:29.186266 kubelet[3150]: E0123 18:58:29.185747 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.074 [INFO][4795] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.085 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0 calico-kube-controllers-8684856889- calico-system 55e05950-1192-4962-a3ec-2be48b4bca2a 819 0 2026-01-23 18:58:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8684856889 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f calico-kube-controllers-8684856889-72bpc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali76746ff362b [] [] }} ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.085 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.118 [INFO][4821] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" HandleID="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" 
Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.118 [INFO][4821] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" HandleID="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f260), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"calico-kube-controllers-8684856889-72bpc", "timestamp":"2026-01-23 18:58:29.118159035 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.118 [INFO][4821] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.118 [INFO][4821] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.118 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.124 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.129 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.132 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.133 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.135 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.135 [INFO][4821] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.136 [INFO][4821] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73 Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.141 [INFO][4821] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.155 [INFO][4821] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.133/26] block=192.168.85.128/26 handle="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.155 [INFO][4821] ipam/ipam.go 878: 
Auto-assigned 1 out of 1 IPv4s: [192.168.85.133/26] handle="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.155 [INFO][4821] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:58:29.194376 containerd[1705]: 2026-01-23 18:58:29.156 [INFO][4821] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.133/26] IPv6=[] ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" HandleID="k8s-pod-network.6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.195108 kubelet[3150]: E0123 18:58:29.193671 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.159 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0", GenerateName:"calico-kube-controllers-8684856889-", Namespace:"calico-system", SelfLink:"", UID:"55e05950-1192-4962-a3ec-2be48b4bca2a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8684856889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"calico-kube-controllers-8684856889-72bpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76746ff362b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.159 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.133/32] ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.159 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali76746ff362b ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.161 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.161 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0", GenerateName:"calico-kube-controllers-8684856889-", Namespace:"calico-system", SelfLink:"", UID:"55e05950-1192-4962-a3ec-2be48b4bca2a", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8684856889", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73", Pod:"calico-kube-controllers-8684856889-72bpc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.85.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali76746ff362b", MAC:"4e:c4:72:9e:3c:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:29.195172 containerd[1705]: 2026-01-23 18:58:29.179 [INFO][4795] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" Namespace="calico-system" Pod="calico-kube-controllers-8684856889-72bpc" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--kube--controllers--8684856889--72bpc-eth0" Jan 23 18:58:29.258180 kubelet[3150]: I0123 18:58:29.257893 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cghv8" podStartSLOduration=39.257876531 podStartE2EDuration="39.257876531s" podCreationTimestamp="2026-01-23 18:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:29.236407258 +0000 UTC m=+44.308907892" watchObservedRunningTime="2026-01-23 18:58:29.257876531 +0000 UTC m=+44.330377281" Jan 23 18:58:29.265985 containerd[1705]: time="2026-01-23T18:58:29.265950517Z" level=info msg="connecting to shim 6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73" address="unix:///run/containerd/s/4cc88107a537c16498daa09052cf6674f131366009c9134f113dc3bc5bef2329" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:29.302141 systemd[1]: Started cri-containerd-6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73.scope - libcontainer container 6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73. Jan 23 18:58:29.322231 systemd-networkd[1345]: cali66dcd2b3587: Link UP Jan 23 18:58:29.323197 systemd-networkd[1345]: cali66dcd2b3587: Gained carrier Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.088 [INFO][4806] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.099 [INFO][4806] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0 calico-apiserver-d6498b48f- calico-apiserver 3cd71195-3bad-4889-98d9-2a171485ef11 822 0 2026-01-23 18:58:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d6498b48f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f calico-apiserver-d6498b48f-btb4t eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali66dcd2b3587 [] [] }} ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.100 [INFO][4806] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.131 [INFO][4828] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" HandleID="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.345795 
containerd[1705]: 2026-01-23 18:58:29.131 [INFO][4828] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" HandleID="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032d3b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.3-a-2aee31126f", "pod":"calico-apiserver-d6498b48f-btb4t", "timestamp":"2026-01-23 18:58:29.131787866 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.131 [INFO][4828] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.156 [INFO][4828] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.156 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.225 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.237 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.261 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.266 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.271 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.272 [INFO][4828] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.283 [INFO][4828] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.303 [INFO][4828] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.318 [INFO][4828] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.134/26] block=192.168.85.128/26 handle="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.318 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.134/26] 
handle="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.318 [INFO][4828] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:58:29.345795 containerd[1705]: 2026-01-23 18:58:29.318 [INFO][4828] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.134/26] IPv6=[] ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" HandleID="k8s-pod-network.e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.319 [INFO][4806] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0", GenerateName:"calico-apiserver-d6498b48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cd71195-3bad-4889-98d9-2a171485ef11", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6498b48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"calico-apiserver-d6498b48f-btb4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66dcd2b3587", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.320 [INFO][4806] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.134/32] ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.320 [INFO][4806] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66dcd2b3587 ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.324 [INFO][4806] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.327 [INFO][4806] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0", GenerateName:"calico-apiserver-d6498b48f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cd71195-3bad-4889-98d9-2a171485ef11", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d6498b48f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a", Pod:"calico-apiserver-d6498b48f-btb4t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali66dcd2b3587", MAC:"8e:d1:3d:6d:54:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:29.346649 containerd[1705]: 2026-01-23 18:58:29.342 [INFO][4806] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" Namespace="calico-apiserver" Pod="calico-apiserver-d6498b48f-btb4t" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--d6498b48f--btb4t-eth0" Jan 23 18:58:29.368854 systemd-networkd[1345]: calibc3a741b9c2: Gained IPv6LL Jan 23 18:58:29.369069 systemd-networkd[1345]: cali616ffe50c98: Gained IPv6LL Jan 23 18:58:29.381793 containerd[1705]: time="2026-01-23T18:58:29.381768328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8684856889-72bpc,Uid:55e05950-1192-4962-a3ec-2be48b4bca2a,Namespace:calico-system,Attempt:0,} returns sandbox id \"6d8bd02f8e900356189d9b33d2f541b138d7f78faa1bf7e8b3e34d86aed1fb73\"" Jan 23 18:58:29.383254 containerd[1705]: time="2026-01-23T18:58:29.383214696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 18:58:29.401692 containerd[1705]: time="2026-01-23T18:58:29.401473391Z" level=info msg="connecting to shim e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a" 
address="unix:///run/containerd/s/a877ca837b413396c9b1753b4e28589b544721b6ce02b4917106e8d68d17866a" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:29.427864 systemd[1]: Started cri-containerd-e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a.scope - libcontainer container e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a. Jan 23 18:58:29.530470 containerd[1705]: time="2026-01-23T18:58:29.530432002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d6498b48f-btb4t,Uid:3cd71195-3bad-4889-98d9-2a171485ef11,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"e8e729064638d422dcf19cdb5275047e316d671d86c756013301d7bc6a10419a\"" Jan 23 18:58:29.652788 containerd[1705]: time="2026-01-23T18:58:29.652738930Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:29.656870 containerd[1705]: time="2026-01-23T18:58:29.656840474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 18:58:29.656996 containerd[1705]: time="2026-01-23T18:58:29.656893417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 18:58:29.657201 kubelet[3150]: E0123 18:58:29.657164 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:29.657249 kubelet[3150]: E0123 18:58:29.657209 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 18:58:29.657430 kubelet[3150]: E0123 18:58:29.657410 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:29.657508 kubelet[3150]: E0123 18:58:29.657451 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 18:58:29.657671 containerd[1705]: 
time="2026-01-23T18:58:29.657651754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:29.930558 containerd[1705]: time="2026-01-23T18:58:29.930435170Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:29.936893 containerd[1705]: time="2026-01-23T18:58:29.936855524Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:29.936956 containerd[1705]: time="2026-01-23T18:58:29.936935202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:29.937110 kubelet[3150]: E0123 18:58:29.937076 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:29.937189 kubelet[3150]: E0123 18:58:29.937122 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:29.937222 kubelet[3150]: E0123 18:58:29.937203 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:29.937268 kubelet[3150]: E0123 18:58:29.937243 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 18:58:30.025334 containerd[1705]: time="2026-01-23T18:58:30.025263003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67858cdf98-r62lm,Uid:5f08c0b6-1683-46fd-9e2c-63a438d878d9,Namespace:calico-apiserver,Attempt:0,}" Jan 23 18:58:30.119879 systemd-networkd[1345]: cali38c1dc2f0a0: Link UP Jan 23 18:58:30.121935 systemd-networkd[1345]: cali38c1dc2f0a0: Gained carrier Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.055 [INFO][4965] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.063 [INFO][4965] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0 
calico-apiserver-67858cdf98- calico-apiserver 5f08c0b6-1683-46fd-9e2c-63a438d878d9 828 0 2026-01-23 18:58:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67858cdf98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f calico-apiserver-67858cdf98-r62lm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali38c1dc2f0a0 [] [] }} ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.063 [INFO][4965] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.084 [INFO][4977] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" HandleID="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.085 [INFO][4977] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" HandleID="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c57c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4459.2.3-a-2aee31126f", "pod":"calico-apiserver-67858cdf98-r62lm", "timestamp":"2026-01-23 18:58:30.084916831 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.085 [INFO][4977] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.085 [INFO][4977] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.085 [INFO][4977] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.089 [INFO][4977] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.093 [INFO][4977] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.096 [INFO][4977] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.098 [INFO][4977] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.099 [INFO][4977] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.099 [INFO][4977] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.100 [INFO][4977] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.107 [INFO][4977] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.115 [INFO][4977] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.135/26] block=192.168.85.128/26 handle="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.115 [INFO][4977] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.135/26] handle="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.115 [INFO][4977] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
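The ipam entries above trace one complete Calico block-affinity assignment: the plugin serializes on a host-wide lock, confirms this host's affinity to the 192.168.85.128/26 block, loads the block, claims one free address (192.168.85.135/26 here), writes the block back to the datastore to persist the claim, and only then releases the lock. Below is a minimal Go sketch of that claim loop; it is illustrative only, not Calico's code — the block type, the mutex standing in for the datastore-backed host-wide lock, and the pre-seeded handles are all assumptions.

    // Sketch of assign-from-affine-block, mirroring the logged sequence:
    // lock -> load block -> claim next free IP -> write block -> unlock.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    type block struct {
        cidr      *net.IPNet        // the host's affine block, e.g. 192.168.85.128/26
        allocated map[string]string // IP -> handle (stand-in for the datastore block)
    }

    var hostLock sync.Mutex // stand-in for the host-wide IPAM lock in the log

    func assignFromBlock(b *block, handle string) (net.IP, error) {
        hostLock.Lock()         // "Acquired host-wide IPAM lock."
        defer hostLock.Unlock() // "Released host-wide IPAM lock."
        for ip := dup(b.cidr.IP); b.cidr.Contains(ip); ip = next(ip) {
            if _, used := b.allocated[ip.String()]; !used {
                b.allocated[ip.String()] = handle // "Writing block in order to claim IPs"
                return dup(ip), nil
            }
        }
        return nil, fmt.Errorf("block %s is full", b.cidr)
    }

    func next(ip net.IP) net.IP { // big-endian increment
        out := dup(ip)
        for i := len(out) - 1; i >= 0; i-- {
            if out[i]++; out[i] != 0 {
                break
            }
        }
        return out
    }

    func dup(ip net.IP) net.IP { out := make(net.IP, len(ip)); copy(out, ip); return out }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.85.128/26")
        b := &block{cidr: cidr, allocated: map[string]string{}}
        for i := 0; i < 6; i++ { // seed .128-.133, already claimed before this excerpt
            assignFromBlock(b, "pre-existing")
        }
        ip, _ := assignFromBlock(b, "k8s-pod-network.demo")
        fmt.Println(ip) // 192.168.85.134, the next free address, as in the log
    }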
Jan 23 18:58:30.141099 containerd[1705]: 2026-01-23 18:58:30.115 [INFO][4977] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.135/26] IPv6=[] ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" HandleID="k8s-pod-network.140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Workload="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.117 [INFO][4965] cni-plugin/k8s.go 418: Populated endpoint ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0", GenerateName:"calico-apiserver-67858cdf98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f08c0b6-1683-46fd-9e2c-63a438d878d9", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67858cdf98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"calico-apiserver-67858cdf98-r62lm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38c1dc2f0a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.117 [INFO][4965] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.135/32] ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.117 [INFO][4965] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38c1dc2f0a0 ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.121 [INFO][4965] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.122 
[INFO][4965] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0", GenerateName:"calico-apiserver-67858cdf98-", Namespace:"calico-apiserver", SelfLink:"", UID:"5f08c0b6-1683-46fd-9e2c-63a438d878d9", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67858cdf98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf", Pod:"calico-apiserver-67858cdf98-r62lm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.85.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali38c1dc2f0a0", MAC:"22:98:3a:e9:03:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:30.141807 containerd[1705]: 2026-01-23 18:58:30.139 [INFO][4965] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" Namespace="calico-apiserver" Pod="calico-apiserver-67858cdf98-r62lm" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-calico--apiserver--67858cdf98--r62lm-eth0" Jan 23 18:58:30.179146 kubelet[3150]: E0123 18:58:30.179108 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 18:58:30.180860 kubelet[3150]: E0123 18:58:30.180775 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 18:58:30.199575 containerd[1705]: time="2026-01-23T18:58:30.199542630Z" level=info msg="connecting to shim 140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf" address="unix:///run/containerd/s/9469d3eb34dd97865958d80f6fbfa6dbaf1398e9adcad9f26eb9b3f5ca28b12e" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:30.230924 systemd[1]: Started cri-containerd-140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf.scope - libcontainer container 140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf. Jan 23 18:58:30.273537 containerd[1705]: time="2026-01-23T18:58:30.273502819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67858cdf98-r62lm,Uid:5f08c0b6-1683-46fd-9e2c-63a438d878d9,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"140a71166016ff1dcdd581eb1654c3149ab634c02a4b5b6bbafb882d80ccdcaf\"" Jan 23 18:58:30.274807 containerd[1705]: time="2026-01-23T18:58:30.274783437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:58:30.538313 containerd[1705]: time="2026-01-23T18:58:30.538160137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:30.541703 containerd[1705]: time="2026-01-23T18:58:30.541364462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:58:30.542010 containerd[1705]: time="2026-01-23T18:58:30.541811704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:30.542231 kubelet[3150]: E0123 18:58:30.542179 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:30.542820 kubelet[3150]: E0123 18:58:30.542557 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:58:30.542820 kubelet[3150]: E0123 18:58:30.542785 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:30.543455 kubelet[3150]: E0123 18:58:30.543287 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 18:58:30.776948 systemd-networkd[1345]: cali66dcd2b3587: Gained IPv6LL Jan 23 18:58:30.904854 systemd-networkd[1345]: cali76746ff362b: Gained IPv6LL Jan 23 18:58:31.025886 containerd[1705]: time="2026-01-23T18:58:31.025837471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgwf,Uid:41eb4aaa-d676-42e0-9ada-ef9e2ad0459b,Namespace:calico-system,Attempt:0,}" Jan 23 18:58:31.030556 containerd[1705]: time="2026-01-23T18:58:31.030519423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7q979,Uid:febc0b92-9a4e-4d79-856f-cbca4e1c9361,Namespace:kube-system,Attempt:0,}" Jan 23 18:58:31.138923 systemd-networkd[1345]: cali13570f90720: Link UP Jan 23 18:58:31.139091 systemd-networkd[1345]: cali13570f90720: Gained carrier Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.062 [INFO][5058] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.072 [INFO][5058] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0 goldmane-7c778bb748- calico-system 41eb4aaa-d676-42e0-9ada-ef9e2ad0459b 829 0 2026-01-23 18:58:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f goldmane-7c778bb748-6qgwf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali13570f90720 [] [] }} ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.073 [INFO][5058] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.105 [INFO][5084] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" HandleID="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Workload="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.105 [INFO][5084] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" HandleID="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Workload="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5130), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"goldmane-7c778bb748-6qgwf", "timestamp":"2026-01-23 18:58:31.105228707 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.105 [INFO][5084] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.105 [INFO][5084] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.105 [INFO][5084] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.111 [INFO][5084] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.113 [INFO][5084] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.116 [INFO][5084] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.117 [INFO][5084] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.119 [INFO][5084] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.119 [INFO][5084] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.120 [INFO][5084] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9 Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.124 [INFO][5084] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.132 [INFO][5084] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.136/26] block=192.168.85.128/26 handle="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.132 [INFO][5084] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.136/26] handle="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.133 [INFO][5084] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
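Every claim in this excerpt lands in the same /26. A /26 leaves 6 host bits, so 192.168.85.128/26 spans the 64 addresses 192.168.85.128 through 192.168.85.191, and the handed-out .134, .135, .136 (and .137 just below) all fall inside it. A quick stdlib check of that arithmetic, with the addresses copied from the log:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // 192.168.85.128/26: 32-26 = 6 host bits -> 64 addresses, .128-.191
        _, blk, _ := net.ParseCIDR("192.168.85.128/26")
        for _, s := range []string{"192.168.85.134", "192.168.85.135",
            "192.168.85.136", "192.168.85.137"} {
            fmt.Printf("%s in %s: %v\n", s, blk, blk.Contains(net.ParseIP(s)))
        }
    }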
Jan 23 18:58:31.154336 containerd[1705]: 2026-01-23 18:58:31.133 [INFO][5084] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.136/26] IPv6=[] ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" HandleID="k8s-pod-network.e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Workload="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.134 [INFO][5058] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"goldmane-7c778bb748-6qgwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali13570f90720", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.136 [INFO][5058] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.136/32] ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.136 [INFO][5058] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13570f90720 ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.139 [INFO][5058] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.140 [INFO][5058] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" 
Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"41eb4aaa-d676-42e0-9ada-ef9e2ad0459b", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9", Pod:"goldmane-7c778bb748-6qgwf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.85.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali13570f90720", MAC:"a6:b9:59:52:30:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:31.155297 containerd[1705]: 2026-01-23 18:58:31.151 [INFO][5058] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" Namespace="calico-system" Pod="goldmane-7c778bb748-6qgwf" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-goldmane--7c778bb748--6qgwf-eth0" Jan 23 18:58:31.184429 kubelet[3150]: E0123 18:58:31.184167 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 18:58:31.185194 kubelet[3150]: E0123 18:58:31.185143 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 18:58:31.186687 kubelet[3150]: E0123 18:58:31.186492 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 18:58:31.200496 containerd[1705]: time="2026-01-23T18:58:31.200457247Z" level=info msg="connecting to shim e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9" address="unix:///run/containerd/s/039d2dce03aababda08ada93be1ebd31e07d039d012eacda09e9f372ba56498c" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:31.232940 systemd[1]: Started cri-containerd-e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9.scope - libcontainer container e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9. Jan 23 18:58:31.287446 systemd-networkd[1345]: calib30befa6c4c: Link UP Jan 23 18:58:31.287632 systemd-networkd[1345]: calib30befa6c4c: Gained carrier Jan 23 18:58:31.289417 containerd[1705]: time="2026-01-23T18:58:31.289365983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-6qgwf,Uid:41eb4aaa-d676-42e0-9ada-ef9e2ad0459b,Namespace:calico-system,Attempt:0,} returns sandbox id \"e73e6164f650b3e3392133c221cf39d43d06a73e8b36d67f768abf26d86683e9\"" Jan 23 18:58:31.293722 containerd[1705]: time="2026-01-23T18:58:31.293403614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.068 [INFO][5069] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.079 [INFO][5069] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0 coredns-66bc5c9577- kube-system febc0b92-9a4e-4d79-856f-cbca4e1c9361 821 0 2026-01-23 18:57:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459.2.3-a-2aee31126f coredns-66bc5c9577-7q979 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib30befa6c4c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.079 [INFO][5069] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.109 [INFO][5090] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" HandleID="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.110 
[INFO][5090] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" HandleID="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cafe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459.2.3-a-2aee31126f", "pod":"coredns-66bc5c9577-7q979", "timestamp":"2026-01-23 18:58:31.10989237 +0000 UTC"}, Hostname:"ci-4459.2.3-a-2aee31126f", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.110 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.132 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.132 [INFO][5090] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459.2.3-a-2aee31126f' Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.216 [INFO][5090] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.235 [INFO][5090] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.249 [INFO][5090] ipam/ipam.go 511: Trying affinity for 192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.260 [INFO][5090] ipam/ipam.go 158: Attempting to load block cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.262 [INFO][5090] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.85.128/26 host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.263 [INFO][5090] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.85.128/26 handle="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.264 [INFO][5090] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.272 [INFO][5090] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.85.128/26 handle="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.283 [INFO][5090] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.85.137/26] block=192.168.85.128/26 handle="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.283 [INFO][5090] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.85.137/26] handle="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" host="ci-4459.2.3-a-2aee31126f" Jan 23 18:58:31.305563 
containerd[1705]: 2026-01-23 18:58:31.283 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 18:58:31.305563 containerd[1705]: 2026-01-23 18:58:31.283 [INFO][5090] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.85.137/26] IPv6=[] ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" HandleID="k8s-pod-network.a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Workload="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.306134 containerd[1705]: 2026-01-23 18:58:31.285 [INFO][5069] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"febc0b92-9a4e-4d79-856f-cbca4e1c9361", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"", Pod:"coredns-66bc5c9577-7q979", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30befa6c4c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:31.306134 containerd[1705]: 2026-01-23 18:58:31.285 [INFO][5069] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.85.137/32] ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.306134 containerd[1705]: 2026-01-23 18:58:31.285 [INFO][5069] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib30befa6c4c 
ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.306134 containerd[1705]: 2026-01-23 18:58:31.288 [INFO][5069] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.306134 containerd[1705]: 2026-01-23 18:58:31.288 [INFO][5069] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"febc0b92-9a4e-4d79-856f-cbca4e1c9361", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 18, 57, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459.2.3-a-2aee31126f", ContainerID:"a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc", Pod:"coredns-66bc5c9577-7q979", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.85.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib30befa6c4c", MAC:"12:15:65:d4:4f:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 18:58:31.306364 containerd[1705]: 2026-01-23 18:58:31.304 [INFO][5069] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" Namespace="kube-system" Pod="coredns-66bc5c9577-7q979" WorkloadEndpoint="ci--4459.2.3--a--2aee31126f-k8s-coredns--66bc5c9577--7q979-eth0" Jan 23 18:58:31.349146 containerd[1705]: 
time="2026-01-23T18:58:31.349080496Z" level=info msg="connecting to shim a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc" address="unix:///run/containerd/s/1816a260cce1a52e162367c95eb6d92a0a98744fe22c475e769567d4ee45c3ee" namespace=k8s.io protocol=ttrpc version=3 Jan 23 18:58:31.374872 systemd[1]: Started cri-containerd-a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc.scope - libcontainer container a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc. Jan 23 18:58:31.436465 containerd[1705]: time="2026-01-23T18:58:31.435706147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-7q979,Uid:febc0b92-9a4e-4d79-856f-cbca4e1c9361,Namespace:kube-system,Attempt:0,} returns sandbox id \"a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc\"" Jan 23 18:58:31.447365 containerd[1705]: time="2026-01-23T18:58:31.447337767Z" level=info msg="CreateContainer within sandbox \"a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 18:58:31.478773 containerd[1705]: time="2026-01-23T18:58:31.477804424Z" level=info msg="Container 168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336: CDI devices from CRI Config.CDIDevices: []" Jan 23 18:58:31.498887 containerd[1705]: time="2026-01-23T18:58:31.498851690Z" level=info msg="CreateContainer within sandbox \"a46ce483403e1917f6a2ca8685dd25b4170a02dc1c9def551080685604bb0bdc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336\"" Jan 23 18:58:31.499745 containerd[1705]: time="2026-01-23T18:58:31.499641231Z" level=info msg="StartContainer for \"168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336\"" Jan 23 18:58:31.501309 containerd[1705]: time="2026-01-23T18:58:31.501244804Z" level=info msg="connecting to shim 168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336" address="unix:///run/containerd/s/1816a260cce1a52e162367c95eb6d92a0a98744fe22c475e769567d4ee45c3ee" protocol=ttrpc version=3 Jan 23 18:58:31.522827 systemd[1]: Started cri-containerd-168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336.scope - libcontainer container 168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336. 
Jan 23 18:58:31.558911 containerd[1705]: time="2026-01-23T18:58:31.558875317Z" level=info msg="StartContainer for \"168c183aa0cd77a5ed5e0b018125a81bdf45aacdc9e17401ce9b313b213fe336\" returns successfully" Jan 23 18:58:31.596663 containerd[1705]: time="2026-01-23T18:58:31.596613069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:58:31.599733 containerd[1705]: time="2026-01-23T18:58:31.599602874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 18:58:31.599843 containerd[1705]: time="2026-01-23T18:58:31.599655733Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 18:58:31.600070 kubelet[3150]: E0123 18:58:31.600037 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:31.600444 kubelet[3150]: E0123 18:58:31.600091 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 18:58:31.600444 kubelet[3150]: E0123 18:58:31.600186 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 18:58:31.600444 kubelet[3150]: E0123 18:58:31.600394 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 18:58:31.800865 systemd-networkd[1345]: cali38c1dc2f0a0: Gained IPv6LL Jan 23 18:58:32.192273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1298329641.mount: Deactivated successfully. 
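The goldmane pull above fails the same way as the earlier ones: containerd's fetch gets a 404 from ghcr.io, the kubelet surfaces it as ErrImagePull, and subsequent pod syncs (next entries) report ImagePullBackOff instead of re-pulling immediately. The retry spacing behind that state is an exponential backoff with a ceiling; the constants in this sketch (10s doubling toward a 5-minute cap, in line with the kubelet's defaults) are illustrative, not read from this log:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for attempt := 1; attempt <= 7; attempt++ {
            // each failed pull reports ErrImagePull, then the pod sits in
            // ImagePullBackOff until the delay elapses
            fmt.Printf("attempt %d: 404 not found, backing off %v\n", attempt, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
    }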
Jan 23 18:58:32.197715 kubelet[3150]: E0123 18:58:32.196974 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:58:32.197715 kubelet[3150]: E0123 18:58:32.197034 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 18:58:32.213438 kubelet[3150]: I0123 18:58:32.213389 3150 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7q979" podStartSLOduration=42.213373864 podStartE2EDuration="42.213373864s" podCreationTimestamp="2026-01-23 18:57:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 18:58:32.207963236 +0000 UTC m=+47.280463872" watchObservedRunningTime="2026-01-23 18:58:32.213373864 +0000 UTC m=+47.285874498"
Jan 23 18:58:32.248824 systemd-networkd[1345]: cali13570f90720: Gained IPv6LL
Jan 23 18:58:32.760954 systemd-networkd[1345]: calib30befa6c4c: Gained IPv6LL
Jan 23 18:58:32.920058 kubelet[3150]: I0123 18:58:32.919809 3150 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 23 18:58:33.196392 kubelet[3150]: E0123 18:58:33.196233 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:58:33.806968 systemd-networkd[1345]: vxlan.calico: Link UP
Jan 23 18:58:33.806984 systemd-networkd[1345]: vxlan.calico: Gained carrier
Jan 23 18:58:35.000835 systemd-networkd[1345]: vxlan.calico: Gained IPv6LL
Jan 23 18:58:37.022146 containerd[1705]: time="2026-01-23T18:58:37.021570796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 18:58:37.296364 containerd[1705]: time="2026-01-23T18:58:37.296231570Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:37.300285 containerd[1705]: time="2026-01-23T18:58:37.299630787Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 18:58:37.300285 containerd[1705]: time="2026-01-23T18:58:37.299999191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 18:58:37.300564 kubelet[3150]: E0123 18:58:37.300533 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:58:37.300889 kubelet[3150]: E0123 18:58:37.300873 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:58:37.301374 kubelet[3150]: E0123 18:58:37.301355 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:37.305108 containerd[1705]: time="2026-01-23T18:58:37.305081612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 18:58:37.584499 containerd[1705]: time="2026-01-23T18:58:37.584369324Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:37.587434 containerd[1705]: time="2026-01-23T18:58:37.587362275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 18:58:37.587434 containerd[1705]: time="2026-01-23T18:58:37.587400354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:58:37.587621 kubelet[3150]: E0123 18:58:37.587582 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:58:37.587703 kubelet[3150]: E0123 18:58:37.587637 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:58:37.587757 kubelet[3150]: E0123 18:58:37.587734 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:37.587827 kubelet[3150]: E0123 18:58:37.587788 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"
Jan 23 18:58:40.021263 containerd[1705]: time="2026-01-23T18:58:40.020982112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 18:58:40.285231 containerd[1705]: time="2026-01-23T18:58:40.285093392Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:40.288241 containerd[1705]: time="2026-01-23T18:58:40.288196730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 18:58:40.288323 containerd[1705]: time="2026-01-23T18:58:40.288210518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 18:58:40.288459 kubelet[3150]: E0123 18:58:40.288425 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:58:40.288759 kubelet[3150]: E0123 18:58:40.288468 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:58:40.288759 kubelet[3150]: E0123 18:58:40.288541 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:40.289712 containerd[1705]: time="2026-01-23T18:58:40.289673601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 18:58:40.553273 containerd[1705]: time="2026-01-23T18:58:40.553147910Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:40.556132 containerd[1705]: time="2026-01-23T18:58:40.556022426Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 18:58:40.556132 containerd[1705]: time="2026-01-23T18:58:40.556107178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 18:58:40.556511 kubelet[3150]: E0123 18:58:40.556372 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:58:40.556575 kubelet[3150]: E0123 18:58:40.556518 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:58:40.556803 kubelet[3150]: E0123 18:58:40.556785 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:40.557751 kubelet[3150]: E0123 18:58:40.557701 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:43.022444 containerd[1705]: time="2026-01-23T18:58:43.022378338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:58:43.283285 containerd[1705]: time="2026-01-23T18:58:43.283153297Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:43.288386 containerd[1705]: time="2026-01-23T18:58:43.288345765Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:58:43.288457 containerd[1705]: time="2026-01-23T18:58:43.288418841Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:43.288602 kubelet[3150]: E0123 18:58:43.288559 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:43.288908 kubelet[3150]: E0123 18:58:43.288602 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:43.288908 kubelet[3150]: E0123 18:58:43.288754 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:43.288908 kubelet[3150]: E0123 18:58:43.288788 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 18:58:43.289709 containerd[1705]: time="2026-01-23T18:58:43.289161979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:58:43.583961 containerd[1705]: time="2026-01-23T18:58:43.583844276Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:43.586839 containerd[1705]: time="2026-01-23T18:58:43.586794681Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:58:43.586944 containerd[1705]: time="2026-01-23T18:58:43.586799125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:43.587034 kubelet[3150]: E0123 18:58:43.587003 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:43.587101 kubelet[3150]: E0123 18:58:43.587044 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:43.587145 kubelet[3150]: E0123 18:58:43.587121 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:43.587176 kubelet[3150]: E0123 18:58:43.587160 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472"
Jan 23 18:58:46.021175 containerd[1705]: time="2026-01-23T18:58:46.021097889Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 18:58:46.286423 containerd[1705]: time="2026-01-23T18:58:46.286254560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:46.290316 containerd[1705]: time="2026-01-23T18:58:46.290269951Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 18:58:46.290372 containerd[1705]: time="2026-01-23T18:58:46.290340493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:58:46.290504 kubelet[3150]: E0123 18:58:46.290460 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:58:46.290865 kubelet[3150]: E0123 18:58:46.290515 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:58:46.290865 kubelet[3150]: E0123 18:58:46.290737 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:46.290865 kubelet[3150]: E0123 18:58:46.290773 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 18:58:46.291328 containerd[1705]: time="2026-01-23T18:58:46.291307129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:58:46.555008 containerd[1705]: time="2026-01-23T18:58:46.554888209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:46.558462 containerd[1705]: time="2026-01-23T18:58:46.558418953Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:58:46.558537 containerd[1705]: time="2026-01-23T18:58:46.558421516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:46.558706 kubelet[3150]: E0123 18:58:46.558648 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:46.558774 kubelet[3150]: E0123 18:58:46.558717 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:58:46.558830 kubelet[3150]: E0123 18:58:46.558811 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:46.558911 kubelet[3150]: E0123 18:58:46.558874 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 18:58:48.021296 containerd[1705]: time="2026-01-23T18:58:48.021202996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 18:58:48.283884 containerd[1705]: time="2026-01-23T18:58:48.283759131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:58:48.287206 containerd[1705]: time="2026-01-23T18:58:48.287089826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 18:58:48.287206 containerd[1705]: time="2026-01-23T18:58:48.287113567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:58:48.287425 kubelet[3150]: E0123 18:58:48.287373 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:58:48.287738 kubelet[3150]: E0123 18:58:48.287433 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:58:48.287738 kubelet[3150]: E0123 18:58:48.287505 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:58:48.287738 kubelet[3150]: E0123 18:58:48.287537 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:58:53.023331 kubelet[3150]: E0123 18:58:53.023214 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"
Jan 23 18:58:56.025185 kubelet[3150]: E0123 18:58:56.025139 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:58:58.022043 kubelet[3150]: E0123 18:58:58.021785 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472"
Jan 23 18:58:59.025797 kubelet[3150]: E0123 18:58:59.023911 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 18:58:59.027183 kubelet[3150]: E0123 18:58:59.027149 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 18:59:01.027545 kubelet[3150]: E0123 18:59:01.027499 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 18:59:03.021360 kubelet[3150]: E0123 18:59:03.021317 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:59:08.021602 containerd[1705]: time="2026-01-23T18:59:08.021560823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 18:59:08.305074 containerd[1705]: time="2026-01-23T18:59:08.304811454Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:08.308234 containerd[1705]: time="2026-01-23T18:59:08.308204378Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 18:59:08.308392 containerd[1705]: time="2026-01-23T18:59:08.308277643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 18:59:08.308539 kubelet[3150]: E0123 18:59:08.308393 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:59:08.308539 kubelet[3150]: E0123 18:59:08.308433 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 18:59:08.310372 kubelet[3150]: E0123 18:59:08.308625 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:08.310416 containerd[1705]: time="2026-01-23T18:59:08.308828018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 18:59:08.591967 containerd[1705]: time="2026-01-23T18:59:08.591438275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:08.594714 containerd[1705]: time="2026-01-23T18:59:08.594625724Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 18:59:08.595797 containerd[1705]: time="2026-01-23T18:59:08.594701783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 18:59:08.596724 kubelet[3150]: E0123 18:59:08.596047 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:59:08.596724 kubelet[3150]: E0123 18:59:08.596093 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 18:59:08.596724 kubelet[3150]: E0123 18:59:08.596281 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:08.597195 containerd[1705]: time="2026-01-23T18:59:08.597174812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 18:59:08.870503 containerd[1705]: time="2026-01-23T18:59:08.870010931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:08.873388 containerd[1705]: time="2026-01-23T18:59:08.873081259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 18:59:08.873720 containerd[1705]: time="2026-01-23T18:59:08.873294462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 18:59:08.873896 kubelet[3150]: E0123 18:59:08.873842 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:59:08.873946 kubelet[3150]: E0123 18:59:08.873908 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 18:59:08.874084 kubelet[3150]: E0123 18:59:08.874067 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:08.874160 kubelet[3150]: E0123 18:59:08.874114 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:59:08.875517 containerd[1705]: time="2026-01-23T18:59:08.874595223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 18:59:09.141533 containerd[1705]: time="2026-01-23T18:59:09.141398845Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:09.144415 containerd[1705]: time="2026-01-23T18:59:09.144294329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 18:59:09.144415 containerd[1705]: time="2026-01-23T18:59:09.144389919Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:59:09.144759 kubelet[3150]: E0123 18:59:09.144713 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:59:09.144845 kubelet[3150]: E0123 18:59:09.144833 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 18:59:09.145017 kubelet[3150]: E0123 18:59:09.144975 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:09.145451 kubelet[3150]: E0123 18:59:09.145410 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"
Jan 23 18:59:10.021557 containerd[1705]: time="2026-01-23T18:59:10.021455318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:59:10.285391 containerd[1705]: time="2026-01-23T18:59:10.285257977Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:10.288327 containerd[1705]: time="2026-01-23T18:59:10.288261387Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:59:10.288432 containerd[1705]: time="2026-01-23T18:59:10.288297368Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:59:10.288525 kubelet[3150]: E0123 18:59:10.288497 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:10.288775 kubelet[3150]: E0123 18:59:10.288533 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:10.288775 kubelet[3150]: E0123 18:59:10.288604 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:10.288775 kubelet[3150]: E0123 18:59:10.288638 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472"
Jan 23 18:59:12.021696 containerd[1705]: time="2026-01-23T18:59:12.021592383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:59:12.293557 containerd[1705]: time="2026-01-23T18:59:12.293403639Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:12.297240 containerd[1705]: time="2026-01-23T18:59:12.297183878Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:59:12.297347 containerd[1705]: time="2026-01-23T18:59:12.297286365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:59:12.297618 kubelet[3150]: E0123 18:59:12.297575 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:12.297936 kubelet[3150]: E0123 18:59:12.297629 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:12.297936 kubelet[3150]: E0123 18:59:12.297722 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:12.297936 kubelet[3150]: E0123 18:59:12.297758 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 18:59:13.023042 containerd[1705]: time="2026-01-23T18:59:13.022278641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 18:59:13.296070 containerd[1705]: time="2026-01-23T18:59:13.295939002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:13.299110 containerd[1705]: time="2026-01-23T18:59:13.299034836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 18:59:13.299367 containerd[1705]: time="2026-01-23T18:59:13.299093993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:59:13.299523 kubelet[3150]: E0123 18:59:13.299487 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:13.300271 kubelet[3150]: E0123 18:59:13.299863 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 18:59:13.300271 kubelet[3150]: E0123 18:59:13.299954 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:13.300271 kubelet[3150]: E0123 18:59:13.299990 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 18:59:14.022324 containerd[1705]: time="2026-01-23T18:59:14.022282921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 18:59:14.289607 containerd[1705]: time="2026-01-23T18:59:14.289371138Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:14.292388 containerd[1705]: time="2026-01-23T18:59:14.292332619Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 18:59:14.292388 containerd[1705]: time="2026-01-23T18:59:14.292365496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 18:59:14.292569 kubelet[3150]: E0123 18:59:14.292498 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:59:14.292569 kubelet[3150]: E0123 18:59:14.292539 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 18:59:14.292651 kubelet[3150]: E0123 18:59:14.292613 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:14.292749 kubelet[3150]: E0123 18:59:14.292647 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 18:59:15.027462 containerd[1705]: time="2026-01-23T18:59:15.027417609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 18:59:15.288466 containerd[1705]: time="2026-01-23T18:59:15.288261667Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 18:59:15.291590 containerd[1705]: time="2026-01-23T18:59:15.291498708Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 18:59:15.292323 containerd[1705]: time="2026-01-23T18:59:15.291976451Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 18:59:15.292516 kubelet[3150]: E0123 18:59:15.292465 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:59:15.293567 kubelet[3150]: E0123 18:59:15.292518 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 18:59:15.293567 kubelet[3150]: E0123 18:59:15.292726 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 18:59:15.293770 kubelet[3150]: E0123 18:59:15.293740 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:59:20.022955 kubelet[3150]: E0123 18:59:20.022920 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"
Jan 23 18:59:23.022252 kubelet[3150]: E0123 18:59:23.021742 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472"
Jan 23 18:59:23.022252 kubelet[3150]: E0123 18:59:23.022111 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 18:59:24.021045 kubelet[3150]: E0123 18:59:24.021005 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 18:59:27.023702 kubelet[3150]: E0123 18:59:27.023152 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 18:59:27.029558 kubelet[3150]: E0123 18:59:27.029528 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 18:59:28.022881 kubelet[3150]: E0123 18:59:28.022838 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 18:59:30.056259 systemd[1]: Started sshd@7-10.200.4.15:22-10.200.16.10:59684.service - OpenSSH per-connection server daemon (10.200.16.10:59684).
Jan 23 18:59:30.690243 sshd[5519]: Accepted publickey for core from 10.200.16.10 port 59684 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:30.691288 sshd-session[5519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:30.695980 systemd-logind[1687]: New session 10 of user core. Jan 23 18:59:30.700823 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 18:59:31.024313 kubelet[3150]: E0123 18:59:31.024246 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 18:59:31.266226 sshd[5522]: Connection closed by 10.200.16.10 port 59684 Jan 23 18:59:31.266809 sshd-session[5519]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:31.272810 systemd-logind[1687]: Session 10 logged out. Waiting for processes to exit. Jan 23 18:59:31.273383 systemd[1]: sshd@7-10.200.4.15:22-10.200.16.10:59684.service: Deactivated successfully. Jan 23 18:59:31.276910 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 18:59:31.281380 systemd-logind[1687]: Removed session 10. 
Jan 23 18:59:34.021241 kubelet[3150]: E0123 18:59:34.021195 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:59:35.024706 kubelet[3150]: E0123 18:59:35.024452 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:59:36.378929 systemd[1]: Started sshd@8-10.200.4.15:22-10.200.16.10:59694.service - OpenSSH per-connection server daemon (10.200.16.10:59694). Jan 23 18:59:37.004869 sshd[5534]: Accepted publickey for core from 10.200.16.10 port 59694 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:37.006024 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:37.010701 systemd-logind[1687]: New session 11 of user core. Jan 23 18:59:37.015394 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 18:59:37.494889 sshd[5537]: Connection closed by 10.200.16.10 port 59694 Jan 23 18:59:37.496861 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:37.503361 systemd-logind[1687]: Session 11 logged out. Waiting for processes to exit. Jan 23 18:59:37.503740 systemd[1]: sshd@8-10.200.4.15:22-10.200.16.10:59694.service: Deactivated successfully. Jan 23 18:59:37.505551 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 18:59:37.509297 systemd-logind[1687]: Removed session 11. 
Jan 23 18:59:38.021340 kubelet[3150]: E0123 18:59:38.021298 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 18:59:39.024713 kubelet[3150]: E0123 18:59:39.023399 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 18:59:42.021100 kubelet[3150]: E0123 18:59:42.021042 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 18:59:42.624971 systemd[1]: Started sshd@9-10.200.4.15:22-10.200.16.10:51308.service - OpenSSH per-connection server daemon (10.200.16.10:51308). Jan 23 18:59:43.024713 kubelet[3150]: E0123 18:59:43.022904 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 18:59:43.257761 sshd[5550]: Accepted publickey for core from 10.200.16.10 port 51308 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:43.258912 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:43.263423 systemd-logind[1687]: New session 12 of user core. Jan 23 18:59:43.270866 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 18:59:43.744273 sshd[5553]: Connection closed by 10.200.16.10 port 51308 Jan 23 18:59:43.745035 sshd-session[5550]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:43.748570 systemd[1]: sshd@9-10.200.4.15:22-10.200.16.10:51308.service: Deactivated successfully. Jan 23 18:59:43.750498 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 23 18:59:43.751312 systemd-logind[1687]: Session 12 logged out. Waiting for processes to exit. Jan 23 18:59:43.752669 systemd-logind[1687]: Removed session 12. Jan 23 18:59:43.853838 systemd[1]: Started sshd@10-10.200.4.15:22-10.200.16.10:51318.service - OpenSSH per-connection server daemon (10.200.16.10:51318). Jan 23 18:59:44.488811 sshd[5566]: Accepted publickey for core from 10.200.16.10 port 51318 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:44.490572 sshd-session[5566]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:44.497177 systemd-logind[1687]: New session 13 of user core. Jan 23 18:59:44.502853 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 18:59:45.015924 sshd[5569]: Connection closed by 10.200.16.10 port 51318 Jan 23 18:59:45.017612 sshd-session[5566]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:45.021849 systemd-logind[1687]: Session 13 logged out. Waiting for processes to exit. Jan 23 18:59:45.022649 systemd[1]: sshd@10-10.200.4.15:22-10.200.16.10:51318.service: Deactivated successfully. Jan 23 18:59:45.025343 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 18:59:45.027057 kubelet[3150]: E0123 18:59:45.026588 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:59:45.030211 systemd-logind[1687]: Removed session 13. Jan 23 18:59:45.129667 systemd[1]: Started sshd@11-10.200.4.15:22-10.200.16.10:51334.service - OpenSSH per-connection server daemon (10.200.16.10:51334). Jan 23 18:59:45.753387 sshd[5581]: Accepted publickey for core from 10.200.16.10 port 51334 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:45.756726 sshd-session[5581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:45.761144 systemd-logind[1687]: New session 14 of user core. Jan 23 18:59:45.768854 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 18:59:46.023336 kubelet[3150]: E0123 18:59:46.022379 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 18:59:46.269656 sshd[5588]: Connection closed by 10.200.16.10 port 51334 Jan 23 18:59:46.271252 sshd-session[5581]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:46.274635 systemd[1]: sshd@11-10.200.4.15:22-10.200.16.10:51334.service: Deactivated successfully. Jan 23 18:59:46.276071 systemd-logind[1687]: Session 14 logged out. Waiting for processes to exit. Jan 23 18:59:46.276537 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 18:59:46.278126 systemd-logind[1687]: Removed session 14. Jan 23 18:59:49.023166 containerd[1705]: time="2026-01-23T18:59:49.022870882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 18:59:49.302594 containerd[1705]: time="2026-01-23T18:59:49.302465862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:49.305598 containerd[1705]: time="2026-01-23T18:59:49.305496817Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 18:59:49.306134 containerd[1705]: time="2026-01-23T18:59:49.305699952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 18:59:49.306408 kubelet[3150]: E0123 18:59:49.306349 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:49.307201 kubelet[3150]: E0123 18:59:49.306797 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 18:59:49.307201 kubelet[3150]: E0123 18:59:49.306927 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:49.309728 containerd[1705]: time="2026-01-23T18:59:49.308897161Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 18:59:49.596515 containerd[1705]: time="2026-01-23T18:59:49.596300779Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:49.599751 containerd[1705]: time="2026-01-23T18:59:49.599713738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 18:59:49.599828 containerd[1705]: time="2026-01-23T18:59:49.599722183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 18:59:49.599991 kubelet[3150]: E0123 18:59:49.599948 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:49.600054 kubelet[3150]: E0123 18:59:49.600003 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 18:59:49.600120 kubelet[3150]: E0123 18:59:49.600101 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-w8lzq_calico-system(ff2219dc-84d2-48b9-9cbc-2450678772ac): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:49.600324 kubelet[3150]: E0123 18:59:49.600270 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 18:59:51.381927 systemd[1]: Started 
sshd@12-10.200.4.15:22-10.200.16.10:45916.service - OpenSSH per-connection server daemon (10.200.16.10:45916). Jan 23 18:59:52.000445 sshd[5602]: Accepted publickey for core from 10.200.16.10 port 45916 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:52.001870 sshd-session[5602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:52.010606 systemd-logind[1687]: New session 15 of user core. Jan 23 18:59:52.014835 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 18:59:52.021209 kubelet[3150]: E0123 18:59:52.020920 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 18:59:52.549196 sshd[5605]: Connection closed by 10.200.16.10 port 45916 Jan 23 18:59:52.550618 sshd-session[5602]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:52.556221 systemd[1]: sshd@12-10.200.4.15:22-10.200.16.10:45916.service: Deactivated successfully. Jan 23 18:59:52.556750 systemd-logind[1687]: Session 15 logged out. Waiting for processes to exit. Jan 23 18:59:52.559391 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 18:59:52.563139 systemd-logind[1687]: Removed session 15. Jan 23 18:59:53.022969 containerd[1705]: time="2026-01-23T18:59:53.022868349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:59:53.291721 containerd[1705]: time="2026-01-23T18:59:53.291590141Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:53.294593 containerd[1705]: time="2026-01-23T18:59:53.294559071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:59:53.294653 containerd[1705]: time="2026-01-23T18:59:53.294640088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:59:53.294848 kubelet[3150]: E0123 18:59:53.294816 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:53.295125 kubelet[3150]: E0123 18:59:53.294858 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:53.295125 kubelet[3150]: E0123 18:59:53.294933 3150 
kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-d6498b48f-btb4t_calico-apiserver(3cd71195-3bad-4889-98d9-2a171485ef11): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:53.295125 kubelet[3150]: E0123 18:59:53.294965 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 18:59:54.022908 kubelet[3150]: E0123 18:59:54.022865 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 18:59:56.021135 containerd[1705]: time="2026-01-23T18:59:56.021097979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:59:56.290399 containerd[1705]: time="2026-01-23T18:59:56.290269237Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:56.293155 containerd[1705]: time="2026-01-23T18:59:56.293119737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:59:56.293234 containerd[1705]: time="2026-01-23T18:59:56.293130315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:59:56.293397 kubelet[3150]: E0123 18:59:56.293361 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:56.293755 kubelet[3150]: E0123 18:59:56.293414 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:56.293755 kubelet[3150]: E0123 18:59:56.293494 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod 
calico-apiserver-d6498b48f-qmxqv_calico-apiserver(5141cf66-e083-4b00-b023-85272b62f472): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:56.293755 kubelet[3150]: E0123 18:59:56.293531 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 18:59:57.023317 containerd[1705]: time="2026-01-23T18:59:57.023073657Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 18:59:57.313990 containerd[1705]: time="2026-01-23T18:59:57.313875551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:57.317777 containerd[1705]: time="2026-01-23T18:59:57.317667180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 18:59:57.318032 containerd[1705]: time="2026-01-23T18:59:57.317867419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 18:59:57.318569 kubelet[3150]: E0123 18:59:57.318250 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:57.318569 kubelet[3150]: E0123 18:59:57.318297 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 18:59:57.318569 kubelet[3150]: E0123 18:59:57.318486 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-67858cdf98-r62lm_calico-apiserver(5f08c0b6-1683-46fd-9e2c-63a438d878d9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:57.320714 containerd[1705]: time="2026-01-23T18:59:57.319593668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 18:59:57.321034 kubelet[3150]: E0123 18:59:57.318523 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 18:59:57.660774 systemd[1]: Started sshd@13-10.200.4.15:22-10.200.16.10:45930.service - OpenSSH per-connection server daemon (10.200.16.10:45930). Jan 23 18:59:57.953262 containerd[1705]: time="2026-01-23T18:59:57.953101953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:57.964652 containerd[1705]: time="2026-01-23T18:59:57.964579577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 18:59:57.964954 containerd[1705]: time="2026-01-23T18:59:57.964704138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 18:59:57.965340 kubelet[3150]: E0123 18:59:57.965051 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:59:57.965340 kubelet[3150]: E0123 18:59:57.965100 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 18:59:57.965857 kubelet[3150]: E0123 18:59:57.965666 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:57.967468 containerd[1705]: time="2026-01-23T18:59:57.967432033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 18:59:58.233375 containerd[1705]: time="2026-01-23T18:59:58.233327094Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 18:59:58.236492 containerd[1705]: time="2026-01-23T18:59:58.236416507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 18:59:58.236492 containerd[1705]: time="2026-01-23T18:59:58.236473264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 18:59:58.236705 kubelet[3150]: E0123 18:59:58.236650 3150 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:59:58.236765 kubelet[3150]: E0123 18:59:58.236712 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 18:59:58.236825 kubelet[3150]: E0123 18:59:58.236793 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-9ffdfbd54-jwxd4_calico-system(485a4546-1705-4b94-89b0-63e51dc5fdcb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 18:59:58.236885 kubelet[3150]: E0123 18:59:58.236848 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 18:59:58.292852 sshd[5648]: Accepted publickey for core from 10.200.16.10 port 45930 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 18:59:58.294030 sshd-session[5648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 18:59:58.298459 systemd-logind[1687]: New session 16 of user core. Jan 23 18:59:58.305819 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 18:59:58.800172 sshd[5651]: Connection closed by 10.200.16.10 port 45930 Jan 23 18:59:58.802862 sshd-session[5648]: pam_unix(sshd:session): session closed for user core Jan 23 18:59:58.807712 systemd[1]: sshd@13-10.200.4.15:22-10.200.16.10:45930.service: Deactivated successfully. Jan 23 18:59:58.808164 systemd-logind[1687]: Session 16 logged out. Waiting for processes to exit. Jan 23 18:59:58.811996 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 18:59:58.817537 systemd-logind[1687]: Removed session 16. 
Jan 23 19:00:03.023921 kubelet[3150]: E0123 19:00:03.023863 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 19:00:03.910942 systemd[1]: Started sshd@14-10.200.4.15:22-10.200.16.10:39674.service - OpenSSH per-connection server daemon (10.200.16.10:39674). Jan 23 19:00:04.539610 sshd[5662]: Accepted publickey for core from 10.200.16.10 port 39674 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:04.541553 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:04.547729 systemd-logind[1687]: New session 17 of user core. Jan 23 19:00:04.556038 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 19:00:05.023522 containerd[1705]: time="2026-01-23T19:00:05.023477167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 19:00:05.061489 sshd[5665]: Connection closed by 10.200.16.10 port 39674 Jan 23 19:00:05.062121 sshd-session[5662]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:05.065791 systemd[1]: sshd@14-10.200.4.15:22-10.200.16.10:39674.service: Deactivated successfully. Jan 23 19:00:05.065987 systemd-logind[1687]: Session 17 logged out. Waiting for processes to exit. Jan 23 19:00:05.069890 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 19:00:05.073340 systemd-logind[1687]: Removed session 17. 
Jan 23 19:00:06.188485 containerd[1705]: time="2026-01-23T19:00:06.188430832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:06.192645 containerd[1705]: time="2026-01-23T19:00:06.192607462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 19:00:06.192838 containerd[1705]: time="2026-01-23T19:00:06.192719114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 19:00:06.192918 kubelet[3150]: E0123 19:00:06.192881 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:06.193151 kubelet[3150]: E0123 19:00:06.192932 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 19:00:06.193151 kubelet[3150]: E0123 19:00:06.193022 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-8684856889-72bpc_calico-system(55e05950-1192-4962-a3ec-2be48b4bca2a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:06.193151 kubelet[3150]: E0123 19:00:06.193057 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 19:00:08.021712 kubelet[3150]: E0123 19:00:08.020699 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 19:00:09.021352 kubelet[3150]: E0123 19:00:09.021186 3150 pod_workers.go:1324] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 19:00:09.025082 containerd[1705]: time="2026-01-23T19:00:09.024986363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 19:00:10.171992 systemd[1]: Started sshd@15-10.200.4.15:22-10.200.16.10:45180.service - OpenSSH per-connection server daemon (10.200.16.10:45180). Jan 23 19:00:10.701070 containerd[1705]: time="2026-01-23T19:00:10.700999525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 19:00:10.704205 containerd[1705]: time="2026-01-23T19:00:10.704156642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 19:00:10.704373 containerd[1705]: time="2026-01-23T19:00:10.704159725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 19:00:10.704515 kubelet[3150]: E0123 19:00:10.704477 3150 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:10.704782 kubelet[3150]: E0123 19:00:10.704528 3150 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 19:00:10.704782 kubelet[3150]: E0123 19:00:10.704609 3150 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-6qgwf_calico-system(41eb4aaa-d676-42e0-9ada-ef9e2ad0459b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 19:00:10.704782 kubelet[3150]: E0123 19:00:10.704643 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 19:00:10.797707 sshd[5683]: Accepted publickey for core from 10.200.16.10 port 45180 ssh2: RSA 
SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:10.800297 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:10.805403 systemd-logind[1687]: New session 18 of user core. Jan 23 19:00:10.811872 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 19:00:11.023610 kubelet[3150]: E0123 19:00:11.023565 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 19:00:11.320452 sshd[5686]: Connection closed by 10.200.16.10 port 45180 Jan 23 19:00:11.323829 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:11.328849 systemd[1]: sshd@15-10.200.4.15:22-10.200.16.10:45180.service: Deactivated successfully. Jan 23 19:00:11.332809 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 19:00:11.334861 systemd-logind[1687]: Session 18 logged out. Waiting for processes to exit. Jan 23 19:00:11.338475 systemd-logind[1687]: Removed session 18. Jan 23 19:00:11.434923 systemd[1]: Started sshd@16-10.200.4.15:22-10.200.16.10:45184.service - OpenSSH per-connection server daemon (10.200.16.10:45184). Jan 23 19:00:12.070074 sshd[5698]: Accepted publickey for core from 10.200.16.10 port 45184 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:12.071225 sshd-session[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:12.075726 systemd-logind[1687]: New session 19 of user core. Jan 23 19:00:12.080823 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 19:00:12.574329 sshd[5716]: Connection closed by 10.200.16.10 port 45184 Jan 23 19:00:12.574912 sshd-session[5698]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:12.578424 systemd[1]: sshd@16-10.200.4.15:22-10.200.16.10:45184.service: Deactivated successfully. Jan 23 19:00:12.580590 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 19:00:12.581694 systemd-logind[1687]: Session 19 logged out. Waiting for processes to exit. Jan 23 19:00:12.582930 systemd-logind[1687]: Removed session 19. Jan 23 19:00:12.683886 systemd[1]: Started sshd@17-10.200.4.15:22-10.200.16.10:45190.service - OpenSSH per-connection server daemon (10.200.16.10:45190). 
Jan 23 19:00:13.026265 kubelet[3150]: E0123 19:00:13.026158 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 19:00:13.306010 sshd[5726]: Accepted publickey for core from 10.200.16.10 port 45190 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:13.307990 sshd-session[5726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:13.315363 systemd-logind[1687]: New session 20 of user core. Jan 23 19:00:13.324061 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 19:00:14.277419 sshd[5729]: Connection closed by 10.200.16.10 port 45190 Jan 23 19:00:14.278991 sshd-session[5726]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:14.282237 systemd-logind[1687]: Session 20 logged out. Waiting for processes to exit. Jan 23 19:00:14.282853 systemd[1]: sshd@17-10.200.4.15:22-10.200.16.10:45190.service: Deactivated successfully. Jan 23 19:00:14.284629 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 19:00:14.286499 systemd-logind[1687]: Removed session 20. Jan 23 19:00:14.389596 systemd[1]: Started sshd@18-10.200.4.15:22-10.200.16.10:45204.service - OpenSSH per-connection server daemon (10.200.16.10:45204). Jan 23 19:00:15.027030 sshd[5744]: Accepted publickey for core from 10.200.16.10 port 45204 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:15.028835 sshd-session[5744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:15.036926 systemd-logind[1687]: New session 21 of user core. Jan 23 19:00:15.041837 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 19:00:15.604219 sshd[5747]: Connection closed by 10.200.16.10 port 45204 Jan 23 19:00:15.605461 sshd-session[5744]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:15.609747 systemd-logind[1687]: Session 21 logged out. Waiting for processes to exit. Jan 23 19:00:15.610876 systemd[1]: sshd@18-10.200.4.15:22-10.200.16.10:45204.service: Deactivated successfully. Jan 23 19:00:15.614416 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 19:00:15.617383 systemd-logind[1687]: Removed session 21. Jan 23 19:00:15.717018 systemd[1]: Started sshd@19-10.200.4.15:22-10.200.16.10:45214.service - OpenSSH per-connection server daemon (10.200.16.10:45214). 
Jan 23 19:00:16.343880 sshd[5759]: Accepted publickey for core from 10.200.16.10 port 45214 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:16.347494 sshd-session[5759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:16.356920 systemd-logind[1687]: New session 22 of user core. Jan 23 19:00:16.361851 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 19:00:16.854248 sshd[5762]: Connection closed by 10.200.16.10 port 45214 Jan 23 19:00:16.854903 sshd-session[5759]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:16.861060 systemd[1]: sshd@19-10.200.4.15:22-10.200.16.10:45214.service: Deactivated successfully. Jan 23 19:00:16.863296 systemd-logind[1687]: Session 22 logged out. Waiting for processes to exit. Jan 23 19:00:16.864527 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 19:00:16.868274 systemd-logind[1687]: Removed session 22. Jan 23 19:00:18.023565 kubelet[3150]: E0123 19:00:18.023498 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac" Jan 23 19:00:19.022328 kubelet[3150]: E0123 19:00:19.021995 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a" Jan 23 19:00:21.964111 systemd[1]: Started sshd@20-10.200.4.15:22-10.200.16.10:39428.service - OpenSSH per-connection server daemon (10.200.16.10:39428). 
Jan 23 19:00:22.021790 kubelet[3150]: E0123 19:00:22.021491 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11" Jan 23 19:00:22.592627 sshd[5778]: Accepted publickey for core from 10.200.16.10 port 39428 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:22.594480 sshd-session[5778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:22.599745 systemd-logind[1687]: New session 23 of user core. Jan 23 19:00:22.605863 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 19:00:23.023435 kubelet[3150]: E0123 19:00:23.023183 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9" Jan 23 19:00:23.102882 sshd[5781]: Connection closed by 10.200.16.10 port 39428 Jan 23 19:00:23.103874 sshd-session[5778]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:23.107058 systemd[1]: sshd@20-10.200.4.15:22-10.200.16.10:39428.service: Deactivated successfully. Jan 23 19:00:23.108661 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 19:00:23.109718 systemd-logind[1687]: Session 23 logged out. Waiting for processes to exit. Jan 23 19:00:23.111195 systemd-logind[1687]: Removed session 23. 
Jan 23 19:00:24.022342 kubelet[3150]: E0123 19:00:24.022281 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472" Jan 23 19:00:25.023148 kubelet[3150]: E0123 19:00:25.023100 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b" Jan 23 19:00:26.022954 kubelet[3150]: E0123 19:00:26.022896 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb" Jan 23 19:00:28.229251 systemd[1]: Started sshd@21-10.200.4.15:22-10.200.16.10:39438.service - OpenSSH per-connection server daemon (10.200.16.10:39438). Jan 23 19:00:28.859603 sshd[5818]: Accepted publickey for core from 10.200.16.10 port 39438 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w Jan 23 19:00:28.860988 sshd-session[5818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 19:00:28.868070 systemd-logind[1687]: New session 24 of user core. Jan 23 19:00:28.873434 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 19:00:29.377735 sshd[5821]: Connection closed by 10.200.16.10 port 39438 Jan 23 19:00:29.378756 sshd-session[5818]: pam_unix(sshd:session): session closed for user core Jan 23 19:00:29.384637 systemd[1]: sshd@21-10.200.4.15:22-10.200.16.10:39438.service: Deactivated successfully. Jan 23 19:00:29.388535 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 19:00:29.390754 systemd-logind[1687]: Session 24 logged out. Waiting for processes to exit. Jan 23 19:00:29.393696 systemd-logind[1687]: Removed session 24. 
Jan 23 19:00:30.021050 kubelet[3150]: E0123 19:00:30.020920 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 19:00:30.023365 kubelet[3150]: E0123 19:00:30.023321 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 19:00:34.485036 systemd[1]: Started sshd@22-10.200.4.15:22-10.200.16.10:56306.service - OpenSSH per-connection server daemon (10.200.16.10:56306).
Jan 23 19:00:35.101707 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 56306 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:00:35.104127 sshd-session[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:35.119827 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 19:00:35.121009 systemd-logind[1687]: New session 25 of user core.
Jan 23 19:00:35.627373 sshd[5835]: Connection closed by 10.200.16.10 port 56306
Jan 23 19:00:35.629872 sshd-session[5832]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:35.634925 systemd-logind[1687]: Session 25 logged out. Waiting for processes to exit.
Jan 23 19:00:35.635320 systemd[1]: sshd@22-10.200.4.15:22-10.200.16.10:56306.service: Deactivated successfully.
Jan 23 19:00:35.639003 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 19:00:35.642081 systemd-logind[1687]: Removed session 25.
Jan 23 19:00:36.020769 kubelet[3150]: E0123 19:00:36.020722 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 19:00:37.026276 kubelet[3150]: E0123 19:00:37.026206 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"
Jan 23 19:00:37.028413 kubelet[3150]: E0123 19:00:37.028342 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-btb4t" podUID="3cd71195-3bad-4889-98d9-2a171485ef11"
Jan 23 19:00:39.022701 kubelet[3150]: E0123 19:00:39.022642 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-6qgwf" podUID="41eb4aaa-d676-42e0-9ada-ef9e2ad0459b"
Jan 23 19:00:39.023172 kubelet[3150]: E0123 19:00:39.023035 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-d6498b48f-qmxqv" podUID="5141cf66-e083-4b00-b023-85272b62f472"
Jan 23 19:00:40.744059 systemd[1]: Started sshd@23-10.200.4.15:22-10.200.16.10:35368.service - OpenSSH per-connection server daemon (10.200.16.10:35368).
Jan 23 19:00:41.363900 sshd[5846]: Accepted publickey for core from 10.200.16.10 port 35368 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:00:41.365504 sshd-session[5846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:41.371560 systemd-logind[1687]: New session 26 of user core.
Jan 23 19:00:41.378851 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 23 19:00:41.868596 sshd[5849]: Connection closed by 10.200.16.10 port 35368
Jan 23 19:00:41.869507 sshd-session[5846]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:41.878039 systemd[1]: sshd@23-10.200.4.15:22-10.200.16.10:35368.service: Deactivated successfully.
Jan 23 19:00:41.880956 systemd[1]: session-26.scope: Deactivated successfully.
Jan 23 19:00:41.885796 systemd-logind[1687]: Session 26 logged out. Waiting for processes to exit.
Jan 23 19:00:41.887259 systemd-logind[1687]: Removed session 26.
Jan 23 19:00:43.023519 kubelet[3150]: E0123 19:00:43.023387 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w8lzq" podUID="ff2219dc-84d2-48b9-9cbc-2450678772ac"
Jan 23 19:00:44.020259 kubelet[3150]: E0123 19:00:44.020201 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-8684856889-72bpc" podUID="55e05950-1192-4962-a3ec-2be48b4bca2a"
Jan 23 19:00:46.981929 systemd[1]: Started sshd@24-10.200.4.15:22-10.200.16.10:35380.service - OpenSSH per-connection server daemon (10.200.16.10:35380).
Jan 23 19:00:47.022252 kubelet[3150]: E0123 19:00:47.022218 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-67858cdf98-r62lm" podUID="5f08c0b6-1683-46fd-9e2c-63a438d878d9"
Jan 23 19:00:47.604087 sshd[5863]: Accepted publickey for core from 10.200.16.10 port 35380 ssh2: RSA SHA256:zaCd/K7VMGze04A9395FnuTfftxugHl3wO01qWoSs+w
Jan 23 19:00:47.605186 sshd-session[5863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 19:00:47.610023 systemd-logind[1687]: New session 27 of user core.
Jan 23 19:00:47.616052 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 23 19:00:48.156506 sshd[5867]: Connection closed by 10.200.16.10 port 35380
Jan 23 19:00:48.157150 sshd-session[5863]: pam_unix(sshd:session): session closed for user core
Jan 23 19:00:48.160369 systemd-logind[1687]: Session 27 logged out. Waiting for processes to exit.
Jan 23 19:00:48.162041 systemd[1]: sshd@24-10.200.4.15:22-10.200.16.10:35380.service: Deactivated successfully.
Jan 23 19:00:48.164761 systemd[1]: session-27.scope: Deactivated successfully.
Jan 23 19:00:48.168048 systemd-logind[1687]: Removed session 27.
Jan 23 19:00:50.021800 kubelet[3150]: E0123 19:00:50.021725 3150 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-9ffdfbd54-jwxd4" podUID="485a4546-1705-4b94-89b0-63e51dc5fdcb"