Jan 24 00:46:46.053207 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026 Jan 24 00:46:46.053235 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:46:46.053247 kernel: BIOS-provided physical RAM map: Jan 24 00:46:46.053253 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 24 00:46:46.053259 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 24 00:46:46.053268 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 24 00:46:46.053275 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jan 24 00:46:46.053286 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jan 24 00:46:46.053294 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 24 00:46:46.053303 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 24 00:46:46.053311 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 24 00:46:46.053319 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 24 00:46:46.053328 kernel: printk: bootconsole [earlyser0] enabled Jan 24 00:46:46.053335 kernel: NX (Execute Disable) protection: active Jan 24 00:46:46.053348 kernel: APIC: Static calls initialized Jan 24 00:46:46.053357 kernel: efi: EFI v2.7 by Microsoft Jan 24 00:46:46.053364 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 Jan 24 00:46:46.055418 kernel: SMBIOS 3.1.0 present. 
Jan 24 00:46:46.055437 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 24 00:46:46.055451 kernel: Hypervisor detected: Microsoft Hyper-V Jan 24 00:46:46.055464 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 24 00:46:46.055478 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Jan 24 00:46:46.055490 kernel: Hyper-V: Nested features: 0x1e0101 Jan 24 00:46:46.055501 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 24 00:46:46.055519 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 24 00:46:46.055532 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 24 00:46:46.055544 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 24 00:46:46.055558 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 24 00:46:46.055571 kernel: tsc: Detected 2593.909 MHz processor Jan 24 00:46:46.055585 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 24 00:46:46.055599 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 24 00:46:46.055611 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 24 00:46:46.055624 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 24 00:46:46.055640 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 24 00:46:46.055654 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 24 00:46:46.055666 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 24 00:46:46.055678 kernel: Using GB pages for direct mapping Jan 24 00:46:46.055693 kernel: Secure boot disabled Jan 24 00:46:46.055705 kernel: ACPI: Early table checksum verification disabled Jan 24 00:46:46.055718 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 24 00:46:46.055738 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055755 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055770 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 24 00:46:46.055784 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 24 00:46:46.055799 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055814 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055829 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055846 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055861 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055875 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055890 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 24 00:46:46.055905 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 24 00:46:46.055920 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 24 00:46:46.055934 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 24 00:46:46.055949 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 24 00:46:46.055967 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 24 00:46:46.055982 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 24 00:46:46.055996 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 24 00:46:46.056009 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 24 00:46:46.056022 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 24 00:46:46.056036 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 24 00:46:46.056049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:46:46.056061 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:46:46.056072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 24 00:46:46.056085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 24 00:46:46.056097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 24 00:46:46.056106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 24 00:46:46.056116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 24 00:46:46.056126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 24 00:46:46.056137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 24 00:46:46.056145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 24 00:46:46.056154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 24 00:46:46.056163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 24 00:46:46.056175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 24 00:46:46.056185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 24 00:46:46.056195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 24 00:46:46.056205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 24 00:46:46.056216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 24 00:46:46.056225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 24 00:46:46.056238 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 24 00:46:46.056249 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 24 00:46:46.056262 kernel: Zone ranges: Jan 24 00:46:46.056278 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:46:46.056291 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:46:46.056302 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:46:46.056313 kernel: Movable zone start for each node Jan 24 00:46:46.056326 kernel: Early memory node ranges Jan 24 00:46:46.056340 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:46:46.056352 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 24 00:46:46.056365 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 24 00:46:46.056400 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:46:46.056418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 24 00:46:46.056432 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:46:46.056444 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:46:46.056458 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 24 00:46:46.056472 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 24 00:46:46.056486 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 24 00:46:46.056500 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:46:46.056514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:46:46.056528 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:46:46.056546 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 24 00:46:46.056560 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:46:46.056574 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 24 00:46:46.056588 kernel: Booting paravirtualized kernel on Hyper-V Jan 24 00:46:46.056602 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:46:46.056617 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:46:46.056631 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:46:46.056645 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:46:46.056659 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:46:46.056676 kernel: Hyper-V: PV spinlocks enabled Jan 24 00:46:46.056690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:46:46.056706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:46:46.056720 kernel: random: crng init done Jan 24 00:46:46.056734 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 24 00:46:46.056748 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:46:46.056762 kernel: Fallback order for Node 0: 0 Jan 24 00:46:46.056777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 24 00:46:46.056794 kernel: Policy zone: Normal Jan 24 00:46:46.056819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:46:46.056834 kernel: software IO TLB: area num 2. Jan 24 00:46:46.056852 kernel: Memory: 8077084K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310116K reserved, 0K cma-reserved) Jan 24 00:46:46.056868 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:46:46.056883 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:46:46.056898 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:46:46.056913 kernel: Dynamic Preempt: voluntary Jan 24 00:46:46.056928 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:46:46.056944 kernel: rcu: RCU event tracing is enabled. Jan 24 00:46:46.056962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:46:46.056978 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:46:46.056993 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:46:46.057008 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:46:46.057024 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:46:46.057039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:46:46.057058 kernel: Using NULL legacy PIC Jan 24 00:46:46.057073 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 24 00:46:46.057088 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:46:46.057104 kernel: Console: colour dummy device 80x25 Jan 24 00:46:46.057119 kernel: printk: console [tty1] enabled Jan 24 00:46:46.057135 kernel: printk: console [ttyS0] enabled Jan 24 00:46:46.057150 kernel: printk: bootconsole [earlyser0] disabled Jan 24 00:46:46.057165 kernel: ACPI: Core revision 20230628 Jan 24 00:46:46.057180 kernel: Failed to register legacy timer interrupt Jan 24 00:46:46.057195 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:46:46.057213 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 24 00:46:46.057228 kernel: Hyper-V: Using IPI hypercalls Jan 24 00:46:46.057243 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 24 00:46:46.057258 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 24 00:46:46.057273 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 24 00:46:46.057289 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 24 00:46:46.057304 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 24 00:46:46.057319 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 24 00:46:46.057334 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909) Jan 24 00:46:46.057353 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:46:46.057368 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:46:46.057395 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:46:46.057411 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:46:46.057425 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:46:46.057441 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:46:46.057455 kernel: RETBleed: Vulnerable Jan 24 00:46:46.057470 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:46:46.057486 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:46:46.057501 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:46:46.057519 kernel: active return thunk: its_return_thunk Jan 24 00:46:46.057535 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:46:46.057550 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:46:46.057565 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:46:46.057581 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:46:46.057596 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:46:46.057611 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:46:46.057625 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:46:46.057640 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:46:46.057656 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:46:46.057671 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:46:46.057688 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:46:46.057703 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 24 00:46:46.057718 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:46:46.057733 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:46:46.057749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:46:46.057764 kernel: landlock: Up and running. Jan 24 00:46:46.057779 kernel: SELinux: Initializing. Jan 24 00:46:46.057794 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.057809 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.057824 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:46:46.057840 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057858 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057873 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057889 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:46:46.057904 kernel: signal: max sigframe size: 3632 Jan 24 00:46:46.057920 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:46:46.057936 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:46:46.057951 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:46:46.057966 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:46:46.057982 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:46:46.058000 kernel: .... node #0, CPUs: #1 Jan 24 00:46:46.058016 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 24 00:46:46.058032 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 24 00:46:46.058047 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:46:46.058062 kernel: smpboot: Max logical packages: 1 Jan 24 00:46:46.058077 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Jan 24 00:46:46.058093 kernel: devtmpfs: initialized Jan 24 00:46:46.058108 kernel: x86/mm: Memory block size: 128MB Jan 24 00:46:46.058126 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 24 00:46:46.058142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:46:46.058158 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:46:46.058173 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:46:46.058188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:46:46.058204 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:46:46.058219 kernel: audit: type=2000 audit(1769215604.029:1): state=initialized audit_enabled=0 res=1 Jan 24 00:46:46.058234 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:46:46.058249 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:46:46.058267 kernel: cpuidle: using governor menu Jan 24 00:46:46.058283 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:46:46.058297 kernel: dca service started, version 1.12.1 Jan 24 00:46:46.058313 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 24 00:46:46.058328 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:46:46.058344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:46:46.058358 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:46:46.058391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:46:46.058406 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:46:46.058425 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:46:46.058440 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:46:46.058456 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:46:46.058471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:46:46.058485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:46:46.058500 kernel: ACPI: Interpreter enabled Jan 24 00:46:46.058516 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:46:46.058531 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:46:46.058546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:46:46.058564 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 24 00:46:46.058580 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 24 00:46:46.058595 kernel: iommu: Default domain type: Translated Jan 24 00:46:46.058610 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:46:46.058626 kernel: efivars: Registered efivars operations Jan 24 00:46:46.058641 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:46:46.058657 kernel: PCI: System does not support PCI Jan 24 00:46:46.058672 kernel: vgaarb: loaded Jan 24 00:46:46.058687 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 24 00:46:46.058705 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:46:46.058720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:46:46.058735 kernel: pnp: PnP ACPI init
Jan 24 00:46:46.058750 kernel: pnp: PnP ACPI: found 3 devices Jan 24 00:46:46.058765 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:46:46.058780 kernel: NET: Registered PF_INET protocol family Jan 24 00:46:46.058796 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:46:46.058811 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 24 00:46:46.058827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:46:46.058844 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:46:46.058859 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 24 00:46:46.058875 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 24 00:46:46.058890 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.058905 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.058920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:46:46.058934 kernel: NET: Registered PF_XDP protocol family Jan 24 00:46:46.058947 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:46:46.058961 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:46:46.058979 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Jan 24 00:46:46.058990 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:46:46.059006 kernel: Initialise system trusted keyrings Jan 24 00:46:46.059023 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 24 00:46:46.059041 kernel: Key type asymmetric registered Jan 24 00:46:46.059057 kernel: Asymmetric key parser 'x509' registered Jan 24 00:46:46.059075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:46:46.059087 kernel: io scheduler mq-deadline registered Jan 24 00:46:46.059101 kernel: io scheduler kyber registered Jan 24 00:46:46.059118 kernel: io scheduler bfq registered Jan 24 00:46:46.059132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:46:46.059146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:46:46.059161 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:46:46.059175 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 24 00:46:46.059189 kernel: i8042: PNP: No PS/2 controller found.
Jan 24 00:46:46.059400 kernel: rtc_cmos 00:02: registered as rtc0 Jan 24 00:46:46.059534 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:46:45 UTC (1769215605) Jan 24 00:46:46.059653 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 24 00:46:46.059671 kernel: intel_pstate: CPU model not supported Jan 24 00:46:46.059685 kernel: efifb: probing for efifb Jan 24 00:46:46.059699 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 24 00:46:46.059713 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 24 00:46:46.059728 kernel: efifb: scrolling: redraw Jan 24 00:46:46.059742 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:46:46.059756 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:46:46.059770 kernel: fb0: EFI VGA frame buffer device Jan 24 00:46:46.059787 kernel: pstore: Using crash dump compression: deflate Jan 24 00:46:46.059801 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:46:46.059815 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:46:46.059829 kernel: Segment Routing with IPv6 Jan 24 00:46:46.059843 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:46:46.059857 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:46:46.059871 kernel: Key type dns_resolver registered Jan 24 00:46:46.059885 kernel: IPI shorthand broadcast: enabled Jan 24 00:46:46.059899 kernel: sched_clock: Marking stable (874003000, 55480500)->(1206959300, -277475800) Jan 24 00:46:46.059915 kernel: registered taskstats version 1 Jan 24 00:46:46.059929 kernel: Loading compiled-in X.509 certificates Jan 24 00:46:46.059943 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:46:46.059957 kernel: Key type .fscrypt registered Jan 24 00:46:46.059971 kernel: Key type fscrypt-provisioning registered Jan 24 00:46:46.059985 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:46:46.059999 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:46:46.060013 kernel: ima: No architecture policies found Jan 24 00:46:46.060027 kernel: clk: Disabling unused clocks Jan 24 00:46:46.060045 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:46:46.060059 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:46:46.060073 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:46:46.060087 kernel: Run /init as init process Jan 24 00:46:46.060101 kernel: with arguments: Jan 24 00:46:46.060114 kernel: /init Jan 24 00:46:46.060128 kernel: with environment: Jan 24 00:46:46.060141 kernel: HOME=/ Jan 24 00:46:46.060155 kernel: TERM=linux Jan 24 00:46:46.060174 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:46:46.060191 systemd[1]: Detected virtualization microsoft. Jan 24 00:46:46.060206 systemd[1]: Detected architecture x86-64. Jan 24 00:46:46.060220 systemd[1]: Running in initrd. Jan 24 00:46:46.060238 systemd[1]: No hostname configured, using default hostname. Jan 24 00:46:46.060253 systemd[1]: Hostname set to . Jan 24 00:46:46.060268 systemd[1]: Initializing machine ID from random generator. 
Jan 24 00:46:46.060286 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:46:46.060300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:46:46.060315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:46:46.060331 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:46:46.060346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:46:46.060361 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:46:46.060387 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:46:46.060408 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:46:46.060423 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:46:46.060438 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:46:46.060453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:46:46.060468 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:46:46.060483 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:46:46.060498 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:46:46.060513 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:46:46.060531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:46:46.060546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:46:46.060561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:46:46.060576 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:46:46.060591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:46:46.060606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:46:46.060621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:46:46.060635 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:46:46.060650 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:46:46.060668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:46:46.060682 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:46:46.060697 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:46:46.060712 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:46:46.060727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:46:46.060762 systemd-journald[177]: Collecting audit messages is disabled. Jan 24 00:46:46.060797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:46:46.060812 systemd-journald[177]: Journal started Jan 24 00:46:46.060842 systemd-journald[177]: Runtime Journal (/run/log/journal/615af8d8c88d487b947e1af634ca4322) is 8.0M, max 158.8M, 150.8M free. Jan 24 00:46:46.065640 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:46:46.074434 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 24 00:46:46.072564 systemd-modules-load[178]: Inserted module 'overlay' Jan 24 00:46:46.075131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:46:46.082174 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:46:46.095605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:46:46.110556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:46:46.120407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:46.133491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:46:46.133517 kernel: Bridge firewalling registered Jan 24 00:46:46.130084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:46:46.130442 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 24 00:46:46.148565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:46:46.154225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:46:46.159650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:46:46.165779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:46:46.177034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:46:46.189546 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:46:46.199589 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:46:46.205430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:46:46.211176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:46:46.217421 dracut-cmdline[209]: dracut-dracut-053 Jan 24 00:46:46.219614 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:46:46.235626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:46:46.276337 systemd-resolved[224]: Positive Trust Anchors: Jan 24 00:46:46.276352 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:46:46.278985 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:46:46.302945 systemd-resolved[224]: Defaulting to hostname 'linux'. 
Jan 24 00:46:46.306464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:46:46.312043 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:46:46.323393 kernel: SCSI subsystem initialized Jan 24 00:46:46.333389 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:46:46.344390 kernel: iscsi: registered transport (tcp) Jan 24 00:46:46.365027 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:46:46.365094 kernel: QLogic iSCSI HBA Driver Jan 24 00:46:46.401249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:46:46.411522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:46:46.438228 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:46:46.438298 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:46:46.441721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:46:46.481396 kernel: raid6: avx512x4 gen() 18291 MB/s Jan 24 00:46:46.500386 kernel: raid6: avx512x2 gen() 18372 MB/s Jan 24 00:46:46.518384 kernel: raid6: avx512x1 gen() 18329 MB/s Jan 24 00:46:46.537382 kernel: raid6: avx2x4 gen() 18333 MB/s Jan 24 00:46:46.556387 kernel: raid6: avx2x2 gen() 18333 MB/s Jan 24 00:46:46.575927 kernel: raid6: avx2x1 gen() 13543 MB/s Jan 24 00:46:46.575967 kernel: raid6: using algorithm avx512x2 gen() 18372 MB/s Jan 24 00:46:46.597017 kernel: raid6: .... xor() 30447 MB/s, rmw enabled Jan 24 00:46:46.597050 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:46:46.620396 kernel: xor: automatically using best checksumming function avx Jan 24 00:46:46.766398 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:46:46.776397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:46:46.786538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:46:46.799532 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 24 00:46:46.803980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:46:46.817546 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:46:46.831098 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jan 24 00:46:46.856599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:46:46.865522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:46:46.907045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:46:46.921549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:46:46.939531 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:46:46.948221 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:46:46.954625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:46:46.960488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:46:46.970345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:46:46.992395 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:46:47.003119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:46:47.027985 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 24 00:46:47.028033 kernel: AES CTR mode by8 optimization enabled Jan 24 00:46:47.029621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:46:47.033256 kernel: hv_vmbus: Vmbus version:5.2 Jan 24 00:46:47.033284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:46:47.041214 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:46:47.044003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:46:47.049613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:46.985993 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 24 00:46:46.993942 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 24 00:46:46.993971 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 24 00:46:46.993988 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 24 00:46:46.994005 kernel: PTP clock support registered Jan 24 00:46:46.994021 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:46:46.994037 kernel: hv_utils: Registering HyperV Utility Driver Jan 24 00:46:46.994058 kernel: hv_vmbus: registering driver hv_utils Jan 24 00:46:46.994075 kernel: hv_utils: Heartbeat IC version 3.0 Jan 24 00:46:46.994094 kernel: hv_utils: TimeSync IC version 4.0 Jan 24 00:46:46.994111 kernel: hv_utils: Shutdown IC version 3.2 Jan 24 00:46:46.994129 systemd-journald[177]: Time jumped backwards, rotating. Jan 24 00:46:46.994201 kernel: hv_vmbus: registering driver hv_netvsc Jan 24 00:46:47.056970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:46:47.002063 kernel: hv_vmbus: registering driver hid_hyperv Jan 24 00:46:47.002079 kernel: hv_vmbus: registering driver hv_storvsc Jan 24 00:46:46.971383 systemd-resolved[224]: Clock change detected. Flushing caches. Jan 24 00:46:47.001098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:46:47.014189 kernel: scsi host1: storvsc_host_t Jan 24 00:46:47.021621 kernel: scsi host0: storvsc_host_t Jan 24 00:46:47.021681 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 24 00:46:47.027177 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 24 00:46:47.027221 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 24 00:46:47.032012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:46:47.032432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:47.044387 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 24 00:46:47.049408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:46:47.073168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:47.083966 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 24 00:46:47.084202 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:46:47.084222 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 24 00:46:47.087325 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 24 00:46:47.110783 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 24 00:46:47.111053 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 24 00:46:47.118256 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:46:47.118477 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 24 00:46:47.118646 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 24 00:46:47.126165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:46:47.136963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:46:47.144971 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: VF slot 1 added Jan 24 00:46:47.145186 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:46:47.161819 kernel: hv_vmbus: registering driver hv_pci Jan 24 00:46:47.161912 kernel: hv_pci a747f1c2-341a-4a29-a0a8-57577dad7fe1: PCI VMBus probing: Using version 0x10004 Jan 24 00:46:47.171159 kernel: hv_pci a747f1c2-341a-4a29-a0a8-57577dad7fe1: PCI host bridge to bus 341a:00 Jan 24 00:46:47.171310 kernel: pci_bus 341a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 24 00:46:47.171439 kernel: pci_bus 341a:00: No busn resource found for root bus, will use [bus 00-ff] Jan 24 00:46:47.176273 kernel: pci 341a:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 24 00:46:47.183248 kernel: pci 341a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:46:47.187242 kernel: pci 341a:00:02.0: enabling Extended Tags Jan 24 00:46:47.196160 kernel: pci 341a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 341a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 24 00:46:47.202165 kernel: pci_bus 341a:00: busn_res: [bus 00-ff] end is updated to 00 Jan 24 00:46:47.202421 kernel: pci 341a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:46:47.367995 kernel: mlx5_core 341a:00:02.0: enabling device (0000 -> 0002) Jan 24 00:46:47.372173 kernel: mlx5_core 341a:00:02.0: firmware version: 14.30.5026 Jan 24 00:46:47.584486 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: VF registering: eth1 Jan 24 00:46:47.584814 kernel: mlx5_core 341a:00:02.0 eth1: joined to eth0 Jan 24 00:46:47.588205 kernel: mlx5_core 341a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 24 00:46:47.597178 kernel: mlx5_core 341a:00:02.0 enP13338s1: renamed from eth1 Jan 24 00:46:47.690803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 24 00:46:47.709170 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (465) Jan 24 00:46:47.724456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:46:47.760174 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (444) Jan 24 00:46:47.775535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 24 00:46:47.781697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 24 00:46:47.794106 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 24 00:46:47.804294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 24 00:46:47.818165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:46:47.827160 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:46:47.834165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:46:48.039211 (udev-worker)[449]: sda9: Failed to create/update device symlink '/dev/disk/by-partlabel/ROOT', ignoring: No such file or directory Jan 24 00:46:48.837987 disk-uuid[605]: The operation has completed successfully. Jan 24 00:46:48.843447 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:46:48.925629 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:46:48.925740 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:46:48.945302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:46:48.951699 sh[718]: Success Jan 24 00:46:48.989520 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:46:49.302688 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:46:49.319275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:46:49.324471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:46:49.359163 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:46:49.359210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:46:49.364035 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:46:49.366783 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:46:49.369334 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:46:49.750864 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:46:49.753289 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:46:49.762375 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:46:49.771055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:46:49.784678 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:46:49.784733 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:46:49.786699 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:46:49.825168 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:46:49.837924 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:46:49.845166 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:46:49.854561 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:46:49.867622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:46:49.881165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:46:49.892287 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:46:49.913169 systemd-networkd[902]: lo: Link UP Jan 24 00:46:49.913177 systemd-networkd[902]: lo: Gained carrier Jan 24 00:46:49.915379 systemd-networkd[902]: Enumeration completed Jan 24 00:46:49.915609 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 24 00:46:49.917862 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:46:49.917867 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:46:49.919768 systemd[1]: Reached target network.target - Network. Jan 24 00:46:49.980172 kernel: mlx5_core 341a:00:02.0 enP13338s1: Link up Jan 24 00:46:50.010171 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: Data path switched to VF: enP13338s1 Jan 24 00:46:50.010935 systemd-networkd[902]: enP13338s1: Link UP Jan 24 00:46:50.011110 systemd-networkd[902]: eth0: Link UP Jan 24 00:46:50.011363 systemd-networkd[902]: eth0: Gained carrier Jan 24 00:46:50.011381 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:46:50.024379 systemd-networkd[902]: enP13338s1: Gained carrier Jan 24 00:46:50.087206 systemd-networkd[902]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 24 00:46:50.870895 ignition[886]: Ignition 2.19.0 Jan 24 00:46:50.870906 ignition[886]: Stage: fetch-offline Jan 24 00:46:50.870947 ignition[886]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:50.870958 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:50.871072 ignition[886]: parsed url from cmdline: "" Jan 24 00:46:50.871077 ignition[886]: no config URL provided Jan 24 00:46:50.871083 ignition[886]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:46:50.871095 ignition[886]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:46:50.871102 ignition[886]: failed to fetch config: resource requires networking Jan 24 00:46:50.878940 ignition[886]: Ignition finished successfully Jan 24 00:46:50.897952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:46:50.908392 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:46:50.924995 ignition[912]: Ignition 2.19.0 Jan 24 00:46:50.925041 ignition[912]: Stage: fetch Jan 24 00:46:50.928294 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:50.928313 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:50.933261 ignition[912]: parsed url from cmdline: "" Jan 24 00:46:50.933344 ignition[912]: no config URL provided Jan 24 00:46:50.933359 ignition[912]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:46:50.933382 ignition[912]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:46:50.933414 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 24 00:46:51.033834 ignition[912]: GET result: OK Jan 24 00:46:51.033943 ignition[912]: config has been read from IMDS userdata Jan 24 00:46:51.033986 ignition[912]: parsing config with SHA512: 7ffbe8ee71d93018ed69e38147a0849b46d683232f061f5a3b88dbf9f1d869f633a07b24602365905bc78ef9414bb014eaa57c583162b14359df9b9a1621ddd7 Jan 24 00:46:51.039690 unknown[912]: fetched base config from "system" Jan 24 00:46:51.039994 ignition[912]: fetch: fetch complete Jan 24 00:46:51.039696 unknown[912]: fetched base config from "system" Jan 24 00:46:51.039998 ignition[912]: fetch: fetch passed Jan 24 00:46:51.039702 unknown[912]: fetched user config from "azure" Jan 24 00:46:51.040034 ignition[912]: Ignition finished successfully Jan 24 00:46:51.041967 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:46:51.054340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:46:51.077642 ignition[919]: Ignition 2.19.0 Jan 24 00:46:51.077652 ignition[919]: Stage: kargs Jan 24 00:46:51.080674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:46:51.077890 ignition[919]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:51.077905 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:51.079192 ignition[919]: kargs: kargs passed Jan 24 00:46:51.079239 ignition[919]: Ignition finished successfully Jan 24 00:46:51.093354 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:46:51.109643 ignition[925]: Ignition 2.19.0 Jan 24 00:46:51.109653 ignition[925]: Stage: disks Jan 24 00:46:51.112682 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:46:51.109874 ignition[925]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:51.118785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:46:51.109888 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:51.121691 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:46:51.111189 ignition[925]: disks: disks passed Jan 24 00:46:51.126797 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:46:51.111235 ignition[925]: Ignition finished successfully Jan 24 00:46:51.129263 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:46:51.146820 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:46:51.156301 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:46:51.222024 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 24 00:46:51.227414 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 24 00:46:51.245262 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:46:51.315286 systemd-networkd[902]: eth0: Gained IPv6LL Jan 24 00:46:51.336406 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:46:51.336977 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:46:51.339651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:46:51.377285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:46:51.394609 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944) Jan 24 00:46:51.394659 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:46:51.396158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:46:51.400020 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:46:51.407165 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:46:51.412243 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:46:51.417726 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:46:51.423945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:46:51.423984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:46:51.436220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:46:51.438642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:46:51.448328 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:46:52.213488 coreos-metadata[961]: Jan 24 00:46:52.213 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 24 00:46:52.218974 coreos-metadata[961]: Jan 24 00:46:52.218 INFO Fetch successful Jan 24 00:46:52.221681 coreos-metadata[961]: Jan 24 00:46:52.218 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 24 00:46:52.229685 coreos-metadata[961]: Jan 24 00:46:52.229 INFO Fetch successful Jan 24 00:46:52.235600 coreos-metadata[961]: Jan 24 00:46:52.229 INFO wrote hostname ci-4081.3.6-n-f1b70866be to /sysroot/etc/hostname Jan 24 00:46:52.231586 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:46:52.274577 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:46:52.360827 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:46:52.366194 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:46:52.371114 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:46:53.300797 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:46:53.317231 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:46:53.330638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:46:53.336574 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:46:53.337295 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:46:53.408867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 24 00:46:53.414045 ignition[1063]: INFO : Ignition 2.19.0 Jan 24 00:46:53.414045 ignition[1063]: INFO : Stage: mount Jan 24 00:46:53.417789 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:53.417789 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:53.423869 ignition[1063]: INFO : mount: mount passed Jan 24 00:46:53.425867 ignition[1063]: INFO : Ignition finished successfully Jan 24 00:46:53.426423 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:46:53.437278 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:46:53.445811 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:46:53.465162 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075) Jan 24 00:46:53.465196 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:46:53.468162 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:46:53.472325 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:46:53.478163 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:46:53.479940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:46:53.506917 ignition[1092]: INFO : Ignition 2.19.0 Jan 24 00:46:53.506917 ignition[1092]: INFO : Stage: files Jan 24 00:46:53.510842 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:53.510842 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:53.510842 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:46:53.520560 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:46:53.520560 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:46:53.622940 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:46:53.626623 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:46:53.626623 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:46:53.623520 unknown[1092]: wrote ssh authorized keys file for user: core Jan 24 00:46:53.636791 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:46:53.636791 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:46:53.699935 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:46:53.766088 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Jan 24 00:46:54.058469 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:46:54.254192 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Jan 24 00:46:54.254192 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:46:54.295548 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:46:54.302042 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:46:54.302042 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:46:54.310044 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:46:54.310044 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:46:54.317241 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:46:54.321383 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:46:54.325554 ignition[1092]: INFO : files: files passed Jan 24 00:46:54.325554 ignition[1092]: INFO : Ignition finished successfully Jan 24 00:46:54.328527 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:46:54.346300 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 24 00:46:54.352475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:46:54.368010 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:46:54.368010 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:46:54.375795 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:46:54.372306 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:46:54.372408 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:46:54.387082 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:46:54.390966 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:46:54.402367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:46:54.426220 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:46:54.426327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:46:54.432066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:46:54.437554 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:46:54.440323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:46:54.458304 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:46:54.471238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:46:54.486342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:46:54.498819 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:46:54.504382 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:46:54.507366 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:46:54.512813 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:46:54.512964 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:46:54.523424 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:46:54.528593 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:46:54.531017 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:46:54.535604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:46:54.538454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:46:54.546581 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:46:54.549346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:46:54.560440 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:46:54.565362 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:46:54.570320 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:46:54.572345 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:46:54.572463 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:46:54.576970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 24 00:46:54.581935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:46:54.587355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:46:54.595178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:46:54.601542 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:46:54.601711 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:46:54.608799 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:46:54.608941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:46:54.613598 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:46:54.613748 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:46:54.618767 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:46:54.618927 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:46:54.640343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:46:54.647349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:46:54.649602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:46:54.649777 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:46:54.652825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:46:54.652966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:46:54.668998 ignition[1144]: INFO : Ignition 2.19.0 Jan 24 00:46:54.668998 ignition[1144]: INFO : Stage: umount Jan 24 00:46:54.668998 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:46:54.668998 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:46:54.668998 ignition[1144]: INFO : umount: umount passed Jan 24 00:46:54.668998 ignition[1144]: INFO : Ignition finished successfully Jan 24 00:46:54.667724 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:46:54.667809 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:46:54.673246 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:46:54.673516 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:46:54.695848 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:46:54.695918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:46:54.702986 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:46:54.703047 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:46:54.709865 systemd[1]: Stopped target network.target - Network. Jan 24 00:46:54.712011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:46:54.712072 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:46:54.719630 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:46:54.724440 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:46:54.729004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:46:54.735685 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:46:54.737817 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 24 00:46:54.742376 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:46:54.742434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:46:54.745107 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:46:54.745165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:46:54.748119 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:46:54.748184 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:46:54.754480 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:46:54.756705 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:46:54.766918 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:46:54.777774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:46:54.781538 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:46:54.782369 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:46:54.782456 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:46:54.786234 systemd-networkd[902]: eth0: DHCPv6 lease lost Jan 24 00:46:54.789011 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:46:54.789117 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:46:54.793715 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:46:54.793790 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:46:54.814354 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:46:54.816526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:46:54.816592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:46:54.821881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:46:54.833845 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:46:54.833969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:46:54.848503 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:46:54.848665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:46:54.852578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:46:54.852648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:46:54.856569 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:46:54.856610 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:46:54.867133 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:46:54.867198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:46:54.869877 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:46:54.869926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:46:54.879704 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:46:54.886643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:46:54.899339 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:46:54.904743 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Jan 24 00:46:54.913596 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: Data path switched from VF: enP13338s1 Jan 24 00:46:54.904804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:46:54.912801 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:46:54.912862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:46:54.915453 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:46:54.915498 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:46:54.915598 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:46:54.915637 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:46:54.940120 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:46:54.940191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:46:54.945616 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:46:54.948390 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:46:54.951312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:46:54.954027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:54.960440 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:46:54.960548 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:46:54.967463 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:46:54.968442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:46:55.275590 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:46:55.275722 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:46:55.280806 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:46:55.285138 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:46:55.285211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:46:55.298325 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:46:55.339337 systemd[1]: Switching root. 
Jan 24 00:46:55.415735 systemd-journald[177]: Journal stopped Jan 24 00:46:46.055967 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 24 00:46:46.055982 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 24 00:46:46.055996 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 24 00:46:46.056009 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 24 00:46:46.056022 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 24 00:46:46.056036 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 24 00:46:46.056049 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:46:46.056061 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:46:46.056072 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 24 00:46:46.056085 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 24 00:46:46.056097 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 24 00:46:46.056106 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 24 00:46:46.056116 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 24 00:46:46.056126 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 24 00:46:46.056137 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 24 00:46:46.056145 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 24 00:46:46.056154 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 24 00:46:46.056163 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 24 00:46:46.056175 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 24 00:46:46.056185 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 24 00:46:46.056195 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 24 00:46:46.056205 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 24 00:46:46.056216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 24 00:46:46.056225 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 24 00:46:46.056238 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 24 00:46:46.056249 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 24 00:46:46.056262 kernel: Zone ranges: Jan 24 00:46:46.056278 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:46:46.056291 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:46:46.056302 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:46:46.056313 kernel: Movable zone start for each node Jan 24 00:46:46.056326 kernel: Early memory node ranges Jan 24 00:46:46.056340 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:46:46.056352 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 24 00:46:46.056365 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 24 00:46:46.056400 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:46:46.056418 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 24 00:46:46.056432 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:46:46.056444 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:46:46.056458 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 24 00:46:46.056472 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 24 00:46:46.056486 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 24 00:46:46.056500 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:46:46.056514 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:46:46.056528 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:46:46.056546 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 24 00:46:46.056560 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:46:46.056574 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 24 00:46:46.056588 kernel: Booting paravirtualized kernel on Hyper-V Jan 24 00:46:46.056602 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:46:46.056617 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:46:46.056631 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:46:46.056645 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:46:46.056659 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:46:46.056676 kernel: Hyper-V: PV spinlocks enabled Jan 24 00:46:46.056690 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:46:46.056706 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:46:46.056720 kernel: random: crng init done Jan 24 00:46:46.056734 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 24 00:46:46.056748 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:46:46.056762 kernel: Fallback order for Node 0: 0 Jan 24 00:46:46.056777 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 24 00:46:46.056794 kernel: Policy zone: Normal Jan 24 00:46:46.056819 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:46:46.056834 kernel: software IO TLB: area num 2. Jan 24 00:46:46.056852 kernel: Memory: 8077084K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310116K reserved, 0K cma-reserved) Jan 24 00:46:46.056868 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:46:46.056883 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:46:46.056898 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:46:46.056913 kernel: Dynamic Preempt: voluntary Jan 24 00:46:46.056928 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:46:46.056944 kernel: rcu: RCU event tracing is enabled. Jan 24 00:46:46.056962 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:46:46.056978 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:46:46.056993 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:46:46.057008 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:46:46.057024 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 24 00:46:46.057039 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:46:46.057058 kernel: Using NULL legacy PIC Jan 24 00:46:46.057073 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 24 00:46:46.057088 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:46:46.057104 kernel: Console: colour dummy device 80x25 Jan 24 00:46:46.057119 kernel: printk: console [tty1] enabled Jan 24 00:46:46.057135 kernel: printk: console [ttyS0] enabled Jan 24 00:46:46.057150 kernel: printk: bootconsole [earlyser0] disabled Jan 24 00:46:46.057165 kernel: ACPI: Core revision 20230628 Jan 24 00:46:46.057180 kernel: Failed to register legacy timer interrupt Jan 24 00:46:46.057195 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:46:46.057213 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 24 00:46:46.057228 kernel: Hyper-V: Using IPI hypercalls Jan 24 00:46:46.057243 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 24 00:46:46.057258 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 24 00:46:46.057273 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 24 00:46:46.057289 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 24 00:46:46.057304 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 24 00:46:46.057319 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 24 00:46:46.057334 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593909) Jan 24 00:46:46.057353 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:46:46.057368 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:46:46.057395 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:46:46.057411 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:46:46.057425 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:46:46.057441 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:46:46.057455 kernel: RETBleed: Vulnerable Jan 24 00:46:46.057470 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:46:46.057486 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:46:46.057501 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:46:46.057519 kernel: active return thunk: its_return_thunk Jan 24 00:46:46.057535 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:46:46.057550 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:46:46.057565 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:46:46.057581 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:46:46.057596 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:46:46.057611 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:46:46.057625 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:46:46.057640 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:46:46.057656 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:46:46.057671 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:46:46.057688 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:46:46.057703 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 24 00:46:46.057718 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:46:46.057733 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:46:46.057749 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:46:46.057764 kernel: landlock: Up and running. Jan 24 00:46:46.057779 kernel: SELinux: Initializing. Jan 24 00:46:46.057794 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.057809 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.057824 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:46:46.057840 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057858 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057873 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:46:46.057889 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:46:46.057904 kernel: signal: max sigframe size: 3632 Jan 24 00:46:46.057920 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:46:46.057936 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:46:46.057951 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:46:46.057966 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:46:46.057982 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:46:46.058000 kernel: .... node #0, CPUs: #1 Jan 24 00:46:46.058016 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 24 00:46:46.058032 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 24 00:46:46.058047 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:46:46.058062 kernel: smpboot: Max logical packages: 1 Jan 24 00:46:46.058077 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Jan 24 00:46:46.058093 kernel: devtmpfs: initialized Jan 24 00:46:46.058108 kernel: x86/mm: Memory block size: 128MB Jan 24 00:46:46.058126 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 24 00:46:46.058142 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:46:46.058158 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:46:46.058173 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:46:46.058188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:46:46.058204 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:46:46.058219 kernel: audit: type=2000 audit(1769215604.029:1): state=initialized audit_enabled=0 res=1 Jan 24 00:46:46.058234 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:46:46.058249 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:46:46.058267 kernel: cpuidle: using governor menu Jan 24 00:46:46.058283 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:46:46.058297 kernel: dca service started, version 1.12.1 Jan 24 00:46:46.058313 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 24 00:46:46.058328 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:46:46.058344 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:46:46.058358 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:46:46.058391 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:46:46.058406 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:46:46.058425 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:46:46.058440 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:46:46.058456 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:46:46.058471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:46:46.058485 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:46:46.058500 kernel: ACPI: Interpreter enabled Jan 24 00:46:46.058516 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:46:46.058531 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:46:46.058546 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:46:46.058564 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 24 00:46:46.058580 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 24 00:46:46.058595 kernel: iommu: Default domain type: Translated Jan 24 00:46:46.058610 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:46:46.058626 kernel: efivars: Registered efivars operations Jan 24 00:46:46.058641 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:46:46.058657 kernel: PCI: System does not support PCI Jan 24 00:46:46.058672 kernel: vgaarb: loaded Jan 24 00:46:46.058687 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 24 00:46:46.058705 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:46:46.058720 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:46:46.058735 kernel: pnp: PnP ACPI init Jan 24 00:46:46.058750 kernel: pnp: PnP ACPI: found 3 
devices Jan 24 00:46:46.058765 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:46:46.058780 kernel: NET: Registered PF_INET protocol family Jan 24 00:46:46.058796 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:46:46.058811 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 24 00:46:46.058827 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:46:46.058844 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:46:46.058859 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 24 00:46:46.058875 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 24 00:46:46.058890 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.058905 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:46:46.058920 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:46:46.058934 kernel: NET: Registered PF_XDP protocol family Jan 24 00:46:46.058947 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:46:46.058961 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:46:46.058979 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Jan 24 00:46:46.058990 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:46:46.059006 kernel: Initialise system trusted keyrings Jan 24 00:46:46.059023 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 24 00:46:46.059041 kernel: Key type asymmetric registered Jan 24 00:46:46.059057 kernel: Asymmetric key parser 'x509' registered Jan 24 00:46:46.059075 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:46:46.059087 kernel: io scheduler mq-deadline registered Jan 24 00:46:46.059101 kernel: io scheduler kyber registered Jan 24 00:46:46.059118 kernel: io scheduler bfq registered Jan 24 00:46:46.059132 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:46:46.059146 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:46:46.059161 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:46:46.059175 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 24 00:46:46.059189 kernel: i8042: PNP: No PS/2 controller found. 
Jan 24 00:46:46.059400 kernel: rtc_cmos 00:02: registered as rtc0 Jan 24 00:46:46.059534 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:46:45 UTC (1769215605) Jan 24 00:46:46.059653 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 24 00:46:46.059671 kernel: intel_pstate: CPU model not supported Jan 24 00:46:46.059685 kernel: efifb: probing for efifb Jan 24 00:46:46.059699 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 24 00:46:46.059713 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 24 00:46:46.059728 kernel: efifb: scrolling: redraw Jan 24 00:46:46.059742 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:46:46.059756 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:46:46.059770 kernel: fb0: EFI VGA frame buffer device Jan 24 00:46:46.059787 kernel: pstore: Using crash dump compression: deflate Jan 24 00:46:46.059801 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:46:46.059815 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:46:46.059829 kernel: Segment Routing with IPv6 Jan 24 00:46:46.059843 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:46:46.059857 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:46:46.059871 kernel: Key type dns_resolver registered Jan 24 00:46:46.059885 kernel: IPI shorthand broadcast: enabled Jan 24 00:46:46.059899 kernel: sched_clock: Marking stable (874003000, 55480500)->(1206959300, -277475800) Jan 24 00:46:46.059915 kernel: registered taskstats version 1 Jan 24 00:46:46.059929 kernel: Loading compiled-in X.509 certificates Jan 24 00:46:46.059943 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:46:46.059957 kernel: Key type .fscrypt registered Jan 24 00:46:46.059971 kernel: Key type fscrypt-provisioning registered Jan 24 00:46:46.059985 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:46:46.059999 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:46:46.060013 kernel: ima: No architecture policies found Jan 24 00:46:46.060027 kernel: clk: Disabling unused clocks Jan 24 00:46:46.060045 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:46:46.060059 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:46:46.060073 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:46:46.060087 kernel: Run /init as init process Jan 24 00:46:46.060101 kernel: with arguments: Jan 24 00:46:46.060114 kernel: /init Jan 24 00:46:46.060128 kernel: with environment: Jan 24 00:46:46.060141 kernel: HOME=/ Jan 24 00:46:46.060155 kernel: TERM=linux Jan 24 00:46:46.060174 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:46:46.060191 systemd[1]: Detected virtualization microsoft. Jan 24 00:46:46.060206 systemd[1]: Detected architecture x86-64. Jan 24 00:46:46.060220 systemd[1]: Running in initrd. Jan 24 00:46:46.060238 systemd[1]: No hostname configured, using default hostname. Jan 24 00:46:46.060253 systemd[1]: Hostname set to <localhost>. Jan 24 00:46:46.060268 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:46:46.060286 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:46:46.060300 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:46:46.060315 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:46:46.060331 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:46:46.060346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:46:46.060361 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:46:46.060387 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:46:46.060408 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:46:46.060423 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:46:46.060438 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:46:46.060453 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:46:46.060468 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:46:46.060483 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:46:46.060498 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:46:46.060513 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:46:46.060531 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:46:46.060546 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:46:46.060561 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:46:46.060576 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:46:46.060591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:46:46.060606 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:46:46.060621 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:46:46.060635 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:46:46.060650 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:46:46.060668 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:46:46.060682 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:46:46.060697 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:46:46.060712 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:46:46.060727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:46:46.060762 systemd-journald[177]: Collecting audit messages is disabled. Jan 24 00:46:46.060797 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:46:46.060812 systemd-journald[177]: Journal started Jan 24 00:46:46.060842 systemd-journald[177]: Runtime Journal (/run/log/journal/615af8d8c88d487b947e1af634ca4322) is 8.0M, max 158.8M, 150.8M free. Jan 24 00:46:46.065640 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:46:46.074434 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 24 00:46:46.072564 systemd-modules-load[178]: Inserted module 'overlay' Jan 24 00:46:46.075131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:46:46.082174 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:46:46.095605 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:46:46.110556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:46:46.120407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:46:46.133491 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:46:46.133517 kernel: Bridge firewalling registered Jan 24 00:46:46.130084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:46:46.130442 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 24 00:46:46.148565 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:46:46.154225 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:46:46.159650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:46:46.165779 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:46:46.177034 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:46:46.189546 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:46:46.199589 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:46:46.205430 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:46:46.211176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:46:46.217421 dracut-cmdline[209]: dracut-dracut-053 Jan 24 00:46:46.219614 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:46:46.235626 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:46:46.276337 systemd-resolved[224]: Positive Trust Anchors: Jan 24 00:46:46.276352 systemd-resolved[224]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:46:46.278985 systemd-resolved[224]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:46:46.302945 systemd-resolved[224]: Defaulting to hostname 'linux'. 
Jan 24 00:46:46.306464 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:46:46.312043 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:46:46.323393 kernel: SCSI subsystem initialized Jan 24 00:46:46.333389 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:46:46.344390 kernel: iscsi: registered transport (tcp) Jan 24 00:46:46.365027 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:46:46.365094 kernel: QLogic iSCSI HBA Driver Jan 24 00:46:46.401249 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:46:46.411522 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:46:46.438228 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:46:46.438298 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:46:46.441721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:46:46.481396 kernel: raid6: avx512x4 gen() 18291 MB/s Jan 24 00:46:46.500386 kernel: raid6: avx512x2 gen() 18372 MB/s Jan 24 00:46:46.518384 kernel: raid6: avx512x1 gen() 18329 MB/s Jan 24 00:46:46.537382 kernel: raid6: avx2x4 gen() 18333 MB/s Jan 24 00:46:46.556387 kernel: raid6: avx2x2 gen() 18333 MB/s Jan 24 00:46:46.575927 kernel: raid6: avx2x1 gen() 13543 MB/s Jan 24 00:46:46.575967 kernel: raid6: using algorithm avx512x2 gen() 18372 MB/s Jan 24 00:46:46.597017 kernel: raid6: .... xor() 30447 MB/s, rmw enabled Jan 24 00:46:46.597050 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:46:46.620396 kernel: xor: automatically using best checksumming function avx Jan 24 00:46:46.766398 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:46:46.776397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:46:46.786538 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:46:46.799532 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 24 00:46:46.803980 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:46:46.817546 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:46:46.831098 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Jan 24 00:46:46.856599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:46:46.865522 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:46:46.907045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:46:46.921549 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:46:46.939531 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:46:46.948221 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:46:46.954625 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:46:46.960488 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:46:46.970345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:46:46.992395 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:46:47.003119 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:46:47.027985 kernel: AVX2 version of gcm_enc/dec engaged. 
Jan 24 00:46:47.028033 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:46:47.029621 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:46:47.033256 kernel: hv_vmbus: Vmbus version:5.2
Jan 24 00:46:47.033284 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:46:47.041214 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:46:47.044003 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:46:47.049613 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:46:46.985993 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 24 00:46:46.993942 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 24 00:46:46.993971 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 24 00:46:46.993988 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 24 00:46:46.994005 kernel: PTP clock support registered
Jan 24 00:46:46.994021 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:46:46.994037 kernel: hv_utils: Registering HyperV Utility Driver
Jan 24 00:46:46.994058 kernel: hv_vmbus: registering driver hv_utils
Jan 24 00:46:46.994075 kernel: hv_utils: Heartbeat IC version 3.0
Jan 24 00:46:46.994094 kernel: hv_utils: TimeSync IC version 4.0
Jan 24 00:46:46.994111 kernel: hv_utils: Shutdown IC version 3.2
Jan 24 00:46:46.994129 systemd-journald[177]: Time jumped backwards, rotating.
Jan 24 00:46:46.994201 kernel: hv_vmbus: registering driver hv_netvsc
Jan 24 00:46:47.056970 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:46:47.002063 kernel: hv_vmbus: registering driver hid_hyperv
Jan 24 00:46:47.002079 kernel: hv_vmbus: registering driver hv_storvsc
Jan 24 00:46:46.971383 systemd-resolved[224]: Clock change detected. Flushing caches.
Jan 24 00:46:47.001098 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:46:47.014189 kernel: scsi host1: storvsc_host_t
Jan 24 00:46:47.021621 kernel: scsi host0: storvsc_host_t
Jan 24 00:46:47.021681 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 24 00:46:47.027177 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 24 00:46:47.027221 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 24 00:46:47.032012 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:46:47.032432 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:46:47.044387 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 24 00:46:47.049408 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:46:47.073168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:46:47.083966 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 24 00:46:47.084202 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:46:47.084222 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 24 00:46:47.087325 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:46:47.110783 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 24 00:46:47.111053 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 24 00:46:47.118256 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:46:47.118477 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 24 00:46:47.118646 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 24 00:46:47.126165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:46:47.136963 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:46:47.144971 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: VF slot 1 added
Jan 24 00:46:47.145186 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:46:47.161819 kernel: hv_vmbus: registering driver hv_pci
Jan 24 00:46:47.161912 kernel: hv_pci a747f1c2-341a-4a29-a0a8-57577dad7fe1: PCI VMBus probing: Using version 0x10004
Jan 24 00:46:47.171159 kernel: hv_pci a747f1c2-341a-4a29-a0a8-57577dad7fe1: PCI host bridge to bus 341a:00
Jan 24 00:46:47.171310 kernel: pci_bus 341a:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 24 00:46:47.171439 kernel: pci_bus 341a:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 24 00:46:47.176273 kernel: pci 341a:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 24 00:46:47.183248 kernel: pci 341a:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:46:47.187242 kernel: pci 341a:00:02.0: enabling Extended Tags
Jan 24 00:46:47.196160 kernel: pci 341a:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 341a:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 24 00:46:47.202165 kernel: pci_bus 341a:00: busn_res: [bus 00-ff] end is updated to 00
Jan 24 00:46:47.202421 kernel: pci 341a:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:46:47.367995 kernel: mlx5_core 341a:00:02.0: enabling device (0000 -> 0002)
Jan 24 00:46:47.372173 kernel: mlx5_core 341a:00:02.0: firmware version: 14.30.5026
Jan 24 00:46:47.584486 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: VF registering: eth1
Jan 24 00:46:47.584814 kernel: mlx5_core 341a:00:02.0 eth1: joined to eth0
Jan 24 00:46:47.588205 kernel: mlx5_core 341a:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 24 00:46:47.597178 kernel: mlx5_core 341a:00:02.0 enP13338s1: renamed from eth1
Jan 24 00:46:47.690803 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 24 00:46:47.709170 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (465)
Jan 24 00:46:47.724456 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 24 00:46:47.760174 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (444)
Jan 24 00:46:47.775535 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 24 00:46:47.781697 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 24 00:46:47.794106 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 24 00:46:47.804294 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:46:47.818165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:46:47.827160 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:46:47.834165 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:46:48.039211 (udev-worker)[449]: sda9: Failed to create/update device symlink '/dev/disk/by-partlabel/ROOT', ignoring: No such file or directory
Jan 24 00:46:48.837987 disk-uuid[605]: The operation has completed successfully.
Jan 24 00:46:48.843447 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:46:48.925629 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:46:48.925740 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:46:48.945302 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:46:48.951699 sh[718]: Success
Jan 24 00:46:48.989520 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:46:49.302688 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:46:49.319275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:46:49.324471 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:46:49.359163 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:46:49.359210 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:46:49.364035 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:46:49.366783 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:46:49.369334 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:46:49.750864 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:46:49.753289 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:46:49.762375 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:46:49.771055 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:46:49.784678 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:46:49.784733 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:46:49.786699 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:46:49.825168 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:46:49.837924 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:46:49.845166 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:46:49.854561 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:46:49.867622 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:46:49.881165 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:46:49.892287 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:46:49.913169 systemd-networkd[902]: lo: Link UP
Jan 24 00:46:49.913177 systemd-networkd[902]: lo: Gained carrier
Jan 24 00:46:49.915379 systemd-networkd[902]: Enumeration completed
Jan 24 00:46:49.915609 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:46:49.917862 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:46:49.917867 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:46:49.919768 systemd[1]: Reached target network.target - Network.
Jan 24 00:46:49.980172 kernel: mlx5_core 341a:00:02.0 enP13338s1: Link up
Jan 24 00:46:50.010171 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: Data path switched to VF: enP13338s1
Jan 24 00:46:50.010935 systemd-networkd[902]: enP13338s1: Link UP
Jan 24 00:46:50.011110 systemd-networkd[902]: eth0: Link UP
Jan 24 00:46:50.011363 systemd-networkd[902]: eth0: Gained carrier
Jan 24 00:46:50.011381 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:46:50.024379 systemd-networkd[902]: enP13338s1: Gained carrier
Jan 24 00:46:50.087206 systemd-networkd[902]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:46:50.870895 ignition[886]: Ignition 2.19.0
Jan 24 00:46:50.870906 ignition[886]: Stage: fetch-offline
Jan 24 00:46:50.870947 ignition[886]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:50.870958 ignition[886]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:50.871072 ignition[886]: parsed url from cmdline: ""
Jan 24 00:46:50.871077 ignition[886]: no config URL provided
Jan 24 00:46:50.871083 ignition[886]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:46:50.871095 ignition[886]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:46:50.871102 ignition[886]: failed to fetch config: resource requires networking
Jan 24 00:46:50.878940 ignition[886]: Ignition finished successfully
Jan 24 00:46:50.897952 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:46:50.908392 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:46:50.924995 ignition[912]: Ignition 2.19.0
Jan 24 00:46:50.925041 ignition[912]: Stage: fetch
Jan 24 00:46:50.928294 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:50.928313 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:50.933261 ignition[912]: parsed url from cmdline: ""
Jan 24 00:46:50.933344 ignition[912]: no config URL provided
Jan 24 00:46:50.933359 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:46:50.933382 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:46:50.933414 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 24 00:46:51.033834 ignition[912]: GET result: OK
Jan 24 00:46:51.033943 ignition[912]: config has been read from IMDS userdata
Jan 24 00:46:51.033986 ignition[912]: parsing config with SHA512: 7ffbe8ee71d93018ed69e38147a0849b46d683232f061f5a3b88dbf9f1d869f633a07b24602365905bc78ef9414bb014eaa57c583162b14359df9b9a1621ddd7
Jan 24 00:46:51.039690 unknown[912]: fetched base config from "system"
Jan 24 00:46:51.039994 ignition[912]: fetch: fetch complete
Jan 24 00:46:51.039696 unknown[912]: fetched base config from "system"
Jan 24 00:46:51.039998 ignition[912]: fetch: fetch passed
Jan 24 00:46:51.039702 unknown[912]: fetched user config from "azure"
Jan 24 00:46:51.040034 ignition[912]: Ignition finished successfully
Jan 24 00:46:51.041967 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:46:51.054340 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:46:51.077642 ignition[919]: Ignition 2.19.0
Jan 24 00:46:51.077652 ignition[919]: Stage: kargs
Jan 24 00:46:51.080674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:46:51.077890 ignition[919]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:51.077905 ignition[919]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:51.079192 ignition[919]: kargs: kargs passed
Jan 24 00:46:51.079239 ignition[919]: Ignition finished successfully
Jan 24 00:46:51.093354 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:46:51.109643 ignition[925]: Ignition 2.19.0
Jan 24 00:46:51.109653 ignition[925]: Stage: disks
Jan 24 00:46:51.112682 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:46:51.109874 ignition[925]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:51.118785 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:46:51.109888 ignition[925]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:51.121691 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:46:51.111189 ignition[925]: disks: disks passed
Jan 24 00:46:51.126797 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:46:51.111235 ignition[925]: Ignition finished successfully
Jan 24 00:46:51.129263 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:46:51.146820 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:46:51.156301 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:46:51.222024 systemd-fsck[933]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 24 00:46:51.227414 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:46:51.245262 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:46:51.315286 systemd-networkd[902]: eth0: Gained IPv6LL
Jan 24 00:46:51.336406 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:46:51.336977 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:46:51.339651 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:46:51.377285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:46:51.394609 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (944)
Jan 24 00:46:51.394659 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:46:51.396158 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:46:51.400020 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:46:51.407165 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:46:51.412243 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:46:51.417726 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 24 00:46:51.423945 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:46:51.423984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:46:51.436220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:46:51.438642 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:46:51.448328 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:46:52.213488 coreos-metadata[961]: Jan 24 00:46:52.213 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 24 00:46:52.218974 coreos-metadata[961]: Jan 24 00:46:52.218 INFO Fetch successful
Jan 24 00:46:52.221681 coreos-metadata[961]: Jan 24 00:46:52.218 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 24 00:46:52.229685 coreos-metadata[961]: Jan 24 00:46:52.229 INFO Fetch successful
Jan 24 00:46:52.235600 coreos-metadata[961]: Jan 24 00:46:52.229 INFO wrote hostname ci-4081.3.6-n-f1b70866be to /sysroot/etc/hostname
Jan 24 00:46:52.231586 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:46:52.274577 initrd-setup-root[974]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:46:52.360827 initrd-setup-root[981]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:46:52.366194 initrd-setup-root[988]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:46:52.371114 initrd-setup-root[995]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:46:53.300797 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:46:53.317231 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:46:53.330638 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:46:53.336574 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:46:53.337295 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:46:53.408867 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:46:53.414045 ignition[1063]: INFO : Ignition 2.19.0
Jan 24 00:46:53.414045 ignition[1063]: INFO : Stage: mount
Jan 24 00:46:53.417789 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:53.417789 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:53.423869 ignition[1063]: INFO : mount: mount passed
Jan 24 00:46:53.425867 ignition[1063]: INFO : Ignition finished successfully
Jan 24 00:46:53.426423 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:46:53.437278 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:46:53.445811 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:46:53.465162 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1075)
Jan 24 00:46:53.465196 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:46:53.468162 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:46:53.472325 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:46:53.478163 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:46:53.479940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:46:53.506917 ignition[1092]: INFO : Ignition 2.19.0
Jan 24 00:46:53.506917 ignition[1092]: INFO : Stage: files
Jan 24 00:46:53.510842 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:53.510842 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:53.510842 ignition[1092]: DEBUG : files: compiled without relabeling support, skipping
Jan 24 00:46:53.520560 ignition[1092]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 24 00:46:53.520560 ignition[1092]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 24 00:46:53.622940 ignition[1092]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 24 00:46:53.626623 ignition[1092]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 24 00:46:53.626623 ignition[1092]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 24 00:46:53.623520 unknown[1092]: wrote ssh authorized keys file for user: core
Jan 24 00:46:53.636791 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:46:53.636791 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 24 00:46:53.699935 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 24 00:46:53.766088 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:46:53.772534 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1
Jan 24 00:46:54.058469 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 24 00:46:54.254192 ignition[1092]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw"
Jan 24 00:46:54.254192 ignition[1092]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 24 00:46:54.295548 ignition[1092]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:46:54.302042 ignition[1092]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 24 00:46:54.302042 ignition[1092]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 24 00:46:54.310044 ignition[1092]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 24 00:46:54.310044 ignition[1092]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 24 00:46:54.317241 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:46:54.321383 ignition[1092]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 24 00:46:54.325554 ignition[1092]: INFO : files: files passed
Jan 24 00:46:54.325554 ignition[1092]: INFO : Ignition finished successfully
Jan 24 00:46:54.328527 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 24 00:46:54.346300 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 24 00:46:54.352475 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 24 00:46:54.368010 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:46:54.368010 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:46:54.375795 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 24 00:46:54.372306 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 24 00:46:54.372408 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 24 00:46:54.387082 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:46:54.390966 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 24 00:46:54.402367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 24 00:46:54.426220 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 24 00:46:54.426327 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 24 00:46:54.432066 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 24 00:46:54.437554 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 24 00:46:54.440323 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 24 00:46:54.458304 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 24 00:46:54.471238 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:46:54.486342 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 24 00:46:54.498819 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:46:54.504382 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:46:54.507366 systemd[1]: Stopped target timers.target - Timer Units.
Jan 24 00:46:54.512813 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 24 00:46:54.512964 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 24 00:46:54.523424 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 24 00:46:54.528593 systemd[1]: Stopped target basic.target - Basic System.
Jan 24 00:46:54.531017 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 24 00:46:54.535604 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:46:54.538454 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 24 00:46:54.546581 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 24 00:46:54.549346 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:46:54.560440 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 24 00:46:54.565362 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 24 00:46:54.570320 systemd[1]: Stopped target swap.target - Swaps.
Jan 24 00:46:54.572345 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 24 00:46:54.572463 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:46:54.576970 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:46:54.581935 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:46:54.587355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 24 00:46:54.595178 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:46:54.601542 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 24 00:46:54.601711 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:46:54.608799 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 24 00:46:54.608941 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 24 00:46:54.613598 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 24 00:46:54.613748 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 24 00:46:54.618767 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 24 00:46:54.618927 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:46:54.640343 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 24 00:46:54.647349 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 24 00:46:54.649602 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 24 00:46:54.649777 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:46:54.652825 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 24 00:46:54.652966 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:46:54.668998 ignition[1144]: INFO : Ignition 2.19.0
Jan 24 00:46:54.668998 ignition[1144]: INFO : Stage: umount
Jan 24 00:46:54.668998 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:46:54.668998 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:46:54.668998 ignition[1144]: INFO : umount: umount passed
Jan 24 00:46:54.668998 ignition[1144]: INFO : Ignition finished successfully
Jan 24 00:46:54.667724 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 24 00:46:54.667809 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 24 00:46:54.673246 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 24 00:46:54.673516 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 24 00:46:54.695848 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 24 00:46:54.695918 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 24 00:46:54.702986 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 24 00:46:54.703047 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 24 00:46:54.709865 systemd[1]: Stopped target network.target - Network.
Jan 24 00:46:54.712011 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 24 00:46:54.712072 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:46:54.719630 systemd[1]: Stopped target paths.target - Path Units.
Jan 24 00:46:54.724440 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 24 00:46:54.729004 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:46:54.735685 systemd[1]: Stopped target slices.target - Slice Units.
Jan 24 00:46:54.737817 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 24 00:46:54.742376 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 24 00:46:54.742434 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:46:54.745107 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 24 00:46:54.745165 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:46:54.748119 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 24 00:46:54.748184 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 24 00:46:54.754480 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 24 00:46:54.756705 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 24 00:46:54.766918 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 24 00:46:54.777774 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 24 00:46:54.781538 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 24 00:46:54.782369 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 24 00:46:54.782456 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 24 00:46:54.786234 systemd-networkd[902]: eth0: DHCPv6 lease lost
Jan 24 00:46:54.789011 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 24 00:46:54.789117 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 24 00:46:54.793715 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 24 00:46:54.793790 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:46:54.814354 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 24 00:46:54.816526 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 24 00:46:54.816592 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:46:54.821881 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:46:54.833845 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 24 00:46:54.833969 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 24 00:46:54.848503 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 24 00:46:54.848665 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:46:54.852578 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 24 00:46:54.852648 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:46:54.856569 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 24 00:46:54.856610 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:46:54.867133 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 24 00:46:54.867198 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:46:54.869877 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 24 00:46:54.869926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:46:54.879704 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:46:54.886643 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:46:54.899339 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 24 00:46:54.904743 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 24 00:46:54.913596 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: Data path switched from VF: enP13338s1
Jan 24 00:46:54.904804 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:46:54.912801 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 24 00:46:54.912862 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:46:54.915453 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 24 00:46:54.915498 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:46:54.915598 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 24 00:46:54.915637 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:46:54.940120 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 24 00:46:54.940191 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:46:54.945616 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 24 00:46:54.948390 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:46:54.951312 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:46:54.954027 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:46:54.960440 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 24 00:46:54.960548 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 24 00:46:54.967463 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 24 00:46:54.968442 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 24 00:46:55.275590 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 24 00:46:55.275722 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 24 00:46:55.280806 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 24 00:46:55.285138 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 24 00:46:55.285211 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 24 00:46:55.298325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 24 00:46:55.339337 systemd[1]: Switching root.
Jan 24 00:46:55.415735 systemd-journald[177]: Journal stopped
Jan 24 00:47:01.636579 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 24 00:47:01.636628 kernel: SELinux: policy capability network_peer_controls=1
Jan 24 00:47:01.636649 kernel: SELinux: policy capability open_perms=1
Jan 24 00:47:01.636664 kernel: SELinux: policy capability extended_socket_class=1
Jan 24 00:47:01.636678 kernel: SELinux: policy capability always_check_network=0
Jan 24 00:47:01.636692 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 24 00:47:01.636708 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 24 00:47:01.636728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 24 00:47:01.636742 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 24 00:47:01.636757 kernel: audit: type=1403 audit(1769215617.261:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 24 00:47:01.636776 systemd[1]: Successfully loaded SELinux policy in 163.214ms.
Jan 24 00:47:01.636793 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.600ms.
Jan 24 00:47:01.636812 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:47:01.636830 systemd[1]: Detected virtualization microsoft.
Jan 24 00:47:01.636851 systemd[1]: Detected architecture x86-64.
Jan 24 00:47:01.636869 systemd[1]: Detected first boot.
Jan 24 00:47:01.636887 systemd[1]: Hostname set to .
Jan 24 00:47:01.636903 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:47:01.636921 zram_generator::config[1186]: No configuration found.
Jan 24 00:47:01.636944 systemd[1]: Populated /etc with preset unit settings.
Jan 24 00:47:01.636961 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 24 00:47:01.636979 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 24 00:47:01.636996 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 24 00:47:01.637015 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 24 00:47:01.637033 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 24 00:47:01.637051 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:47:01.637072 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 24 00:47:01.637091 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 24 00:47:01.637108 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 24 00:47:01.637126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 24 00:47:01.637161 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 24 00:47:01.637180 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:47:01.637196 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:47:01.637211 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 24 00:47:01.637232 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 24 00:47:01.637248 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 24 00:47:01.637264 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:47:01.637280 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 24 00:47:01.637298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:47:01.637315 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 24 00:47:01.637337 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 24 00:47:01.637354 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:47:01.637374 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 24 00:47:01.637390 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:47:01.637410 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:47:01.637428 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:47:01.637445 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:47:01.637463 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 24 00:47:01.637479 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 24 00:47:01.637499 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:47:01.637516 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:47:01.637536 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:47:01.637553 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 24 00:47:01.637572 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 24 00:47:01.637593 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 24 00:47:01.637612 systemd[1]: Mounting media.mount - External Media Directory...
Jan 24 00:47:01.637630 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:01.637649 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 24 00:47:01.637667 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 24 00:47:01.637685 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 24 00:47:01.637704 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:47:01.637721 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:47:01.637743 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 24 00:47:01.637761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:47:01.637780 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:47:01.637800 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 24 00:47:01.637818 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:47:01.637836 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:47:01.637854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:47:01.637872 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 24 00:47:01.637891 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:47:01.637913 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 24 00:47:01.637931 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 24 00:47:01.637949 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 24 00:47:01.637968 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 24 00:47:01.637986 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 24 00:47:01.638005 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:47:01.638022 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:47:01.638040 kernel: fuse: init (API version 7.39)
Jan 24 00:47:01.638061 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 24 00:47:01.638105 systemd-journald[1271]: Collecting audit messages is disabled.
Jan 24 00:47:01.638161 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 24 00:47:01.638180 systemd-journald[1271]: Journal started
Jan 24 00:47:01.638217 systemd-journald[1271]: Runtime Journal (/run/log/journal/1a79b909235e4d6f8b25339183738ab5) is 8.0M, max 158.8M, 150.8M free.
Jan 24 00:47:00.908509 systemd[1]: Queued start job for default target multi-user.target.
Jan 24 00:47:01.061629 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 24 00:47:01.062007 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 24 00:47:01.647566 kernel: loop: module loaded
Jan 24 00:47:01.660182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:47:01.671162 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 24 00:47:01.676169 systemd[1]: Stopped verity-setup.service.
Jan 24 00:47:01.687764 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:01.693601 kernel: ACPI: bus type drm_connector registered
Jan 24 00:47:01.693663 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:47:01.696832 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 24 00:47:01.699540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 24 00:47:01.702292 systemd[1]: Mounted media.mount - External Media Directory.
Jan 24 00:47:01.704821 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 24 00:47:01.707559 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 24 00:47:01.710476 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 24 00:47:01.713036 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 24 00:47:01.716332 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:47:01.719540 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 24 00:47:01.719695 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 24 00:47:01.722725 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:47:01.722881 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:47:01.725864 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:47:01.725990 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:47:01.728674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:47:01.728831 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:47:01.732104 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 24 00:47:01.732427 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 24 00:47:01.735381 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:47:01.735533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:47:01.738445 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 24 00:47:01.750704 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 24 00:47:01.759268 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 24 00:47:01.763343 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 24 00:47:01.766483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:47:01.769297 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:47:01.775117 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:47:01.782392 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 24 00:47:01.785731 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 24 00:47:01.788945 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 24 00:47:01.794838 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 24 00:47:01.794916 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:47:01.803356 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:47:01.813301 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:47:01.822536 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:47:01.825825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:47:01.831348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 24 00:47:01.840294 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 24 00:47:01.845675 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:47:01.849281 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 24 00:47:01.858328 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:47:01.870338 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:47:01.877337 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:47:01.880637 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:47:01.893103 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 24 00:47:01.896616 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 24 00:47:01.905262 systemd-journald[1271]: Time spent on flushing to /var/log/journal/1a79b909235e4d6f8b25339183738ab5 is 142.871ms for 963 entries.
Jan 24 00:47:01.905262 systemd-journald[1271]: System Journal (/var/log/journal/1a79b909235e4d6f8b25339183738ab5) is 11.8M, max 2.6G, 2.6G free.
Jan 24 00:47:02.182941 systemd-journald[1271]: Received client request to flush runtime journal.
Jan 24 00:47:02.183029 kernel: loop0: detected capacity change from 0 to 140768
Jan 24 00:47:02.183059 systemd-journald[1271]: /var/log/journal/1a79b909235e4d6f8b25339183738ab5/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating.
Jan 24 00:47:02.183102 systemd-journald[1271]: Rotating system journal.
Jan 24 00:47:01.900527 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 24 00:47:01.911596 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 24 00:47:01.911617 systemd-tmpfiles[1314]: ACLs are not supported, ignoring.
Jan 24 00:47:01.912296 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:47:01.924994 udevadm[1330]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 24 00:47:01.930648 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:47:01.938321 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 24 00:47:02.087454 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 24 00:47:02.097316 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:47:02.111222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:47:02.116370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:47:02.119469 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:47:02.137848 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Jan 24 00:47:02.137871 systemd-tmpfiles[1339]: ACLs are not supported, ignoring.
Jan 24 00:47:02.143277 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:47:02.188417 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 24 00:47:02.583175 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:47:02.664421 kernel: loop1: detected capacity change from 0 to 229808
Jan 24 00:47:02.714024 kernel: loop2: detected capacity change from 0 to 31056
Jan 24 00:47:03.125951 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 24 00:47:03.134447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:47:03.165810 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Jan 24 00:47:03.358172 kernel: loop3: detected capacity change from 0 to 142488
Jan 24 00:47:03.487949 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:47:03.522311 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:47:03.563073 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 24 00:47:03.664794 kernel: hv_vmbus: registering driver hv_balloon
Jan 24 00:47:03.662425 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 24 00:47:03.680190 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 24 00:47:03.718160 kernel: mousedev: PS/2 mouse device common for all mice
Jan 24 00:47:03.718256 kernel: hv_vmbus: registering driver hyperv_fb
Jan 24 00:47:03.724200 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 24 00:47:03.731212 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 24 00:47:03.733173 kernel: Console: switching to colour dummy device 80x25
Jan 24 00:47:03.736765 kernel: Console: switching to colour frame buffer device 128x48
Jan 24 00:47:03.839924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:47:03.874702 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 24 00:47:03.903980 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:03.906257 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:03.920407 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:04.015435 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:04.015639 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:04.028229 kernel: loop4: detected capacity change from 0 to 140768 Jan 24 00:47:04.031476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:04.056244 kernel: loop5: detected capacity change from 0 to 229808 Jan 24 00:47:04.060173 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1371) Jan 24 00:47:04.095170 kernel: loop6: detected capacity change from 0 to 31056 Jan 24 00:47:04.113450 systemd-networkd[1368]: lo: Link UP Jan 24 00:47:04.113465 systemd-networkd[1368]: lo: Gained carrier Jan 24 00:47:04.121886 systemd-networkd[1368]: Enumeration completed Jan 24 00:47:04.122012 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:47:04.125090 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 24 00:47:04.125142 kernel: loop7: detected capacity change from 0 to 142488 Jan 24 00:47:04.132345 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:47:04.133056 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:47:04.133064 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:47:04.194413 kernel: mlx5_core 341a:00:02.0 enP13338s1: Link up Jan 24 00:47:04.196959 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:47:04.213727 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 24 00:47:04.220596 (sd-merge)[1414]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 24 00:47:04.221234 (sd-merge)[1414]: Merged extensions into '/usr'. Jan 24 00:47:04.235168 kernel: hv_netvsc 7ced8d9c-f6de-7ced-8d9c-f6de7ced8d9c eth0: Data path switched to VF: enP13338s1 Jan 24 00:47:04.239282 systemd-networkd[1368]: enP13338s1: Link UP Jan 24 00:47:04.239431 systemd-networkd[1368]: eth0: Link UP Jan 24 00:47:04.239436 systemd-networkd[1368]: eth0: Gained carrier Jan 24 00:47:04.240032 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:47:04.245049 systemd[1]: Reloading requested from client PID 1326 ('systemd-sysext') (unit systemd-sysext.service)... Jan 24 00:47:04.245066 systemd[1]: Reloading... Jan 24 00:47:04.245521 systemd-networkd[1368]: enP13338s1: Gained carrier Jan 24 00:47:04.281201 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 24 00:47:04.331177 zram_generator::config[1476]: No configuration found. Jan 24 00:47:04.534204 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jan 24 00:47:04.616758 systemd[1]: Reloading finished in 370 ms. Jan 24 00:47:04.647693 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 24 00:47:04.651472 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 24 00:47:04.651826 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 24 00:47:04.669311 systemd[1]: Starting ensure-sysext.service... Jan 24 00:47:04.677555 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 24 00:47:04.685163 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:47:04.694320 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:04.698121 systemd[1]: Reloading requested from client PID 1540 ('systemctl') (unit ensure-sysext.service)... Jan 24 00:47:04.699197 systemd[1]: Reloading... Jan 24 00:47:04.721127 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 24 00:47:04.721693 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 24 00:47:04.722983 systemd-tmpfiles[1543]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 24 00:47:04.723667 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jan 24 00:47:04.723760 systemd-tmpfiles[1543]: ACLs are not supported, ignoring. Jan 24 00:47:04.783175 zram_generator::config[1578]: No configuration found. Jan 24 00:47:04.798885 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:47:04.798899 systemd-tmpfiles[1543]: Skipping /boot Jan 24 00:47:04.812321 systemd-tmpfiles[1543]: Detected autofs mount point /boot during canonicalization of boot. Jan 24 00:47:04.812338 systemd-tmpfiles[1543]: Skipping /boot Jan 24 00:47:04.843179 lvm[1541]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:47:04.931435 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:47:05.009794 systemd[1]: Reloading finished in 310 ms. Jan 24 00:47:05.032636 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 24 00:47:05.036295 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:47:05.049085 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:47:05.058434 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:47:05.085622 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 24 00:47:05.091926 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 24 00:47:05.100261 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 24 00:47:05.107707 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:47:05.114808 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 24 00:47:05.122522 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Jan 24 00:47:05.129081 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:05.130392 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:47:05.137429 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:47:05.149011 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:47:05.165272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:47:05.168089 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:47:05.168286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:05.179940 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 24 00:47:05.183543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:47:05.183752 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:47:05.187214 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:47:05.187384 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:47:05.191016 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:47:05.191138 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:47:05.203483 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 24 00:47:05.212927 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:05.213312 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:47:05.221614 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:47:05.231054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:47:05.241921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:47:05.247704 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:47:05.250511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:47:05.250782 systemd[1]: Reached target time-set.target - System Time Set. Jan 24 00:47:05.253625 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:05.255091 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:47:05.255910 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:47:05.265859 systemd[1]: Finished ensure-sysext.service. Jan 24 00:47:05.271496 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:47:05.271733 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:47:05.283797 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:47:05.284860 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:47:05.288412 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jan 24 00:47:05.288576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:47:05.292067 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:47:05.292597 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:47:05.313507 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 24 00:47:05.325077 augenrules[1675]: No rules Jan 24 00:47:05.325906 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:47:05.333122 systemd-resolved[1642]: Positive Trust Anchors: Jan 24 00:47:05.333136 systemd-resolved[1642]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:47:05.333252 systemd-resolved[1642]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:47:05.384318 systemd-resolved[1642]: Using system hostname 'ci-4081.3.6-n-f1b70866be'. Jan 24 00:47:05.386000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:47:05.389658 systemd[1]: Reached target network.target - Network. Jan 24 00:47:05.392232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:47:05.587437 systemd-networkd[1368]: eth0: Gained IPv6LL Jan 24 00:47:05.590282 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 24 00:47:05.594103 systemd[1]: Reached target network-online.target - Network is Online. Jan 24 00:47:05.985949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 24 00:47:05.990261 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 24 00:47:09.824861 ldconfig[1321]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 24 00:47:09.892167 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 24 00:47:09.904349 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 24 00:47:09.915755 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 24 00:47:09.918803 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:47:09.921742 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 24 00:47:09.924747 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 24 00:47:09.928043 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 24 00:47:09.930934 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Jan 24 00:47:09.934009 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 24 00:47:09.937131 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 24 00:47:09.937191 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:47:09.939466 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:47:09.942817 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 24 00:47:09.947066 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 24 00:47:09.956944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 24 00:47:09.960263 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 24 00:47:09.963006 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:47:09.965371 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:47:09.967700 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:47:09.967733 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 24 00:47:09.973255 systemd[1]: Starting chronyd.service - NTP client/server... Jan 24 00:47:09.977260 systemd[1]: Starting containerd.service - containerd container runtime... Jan 24 00:47:09.986372 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 24 00:47:09.992326 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 24 00:47:09.998732 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 24 00:47:10.012355 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 24 00:47:10.015098 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 24 00:47:10.015173 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 24 00:47:10.020346 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 24 00:47:10.025958 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 24 00:47:10.027809 jq[1693]: false Jan 24 00:47:10.036239 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:47:10.042340 (chronyd)[1689]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 24 00:47:10.042530 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 24 00:47:10.050367 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 24 00:47:10.054267 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 24 00:47:10.062323 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 24 00:47:10.067540 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 24 00:47:10.079388 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 24 00:47:10.082471 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jan 24 00:47:10.083008 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 24 00:47:10.086111 KVP[1697]: KVP starting; pid is:1697 Jan 24 00:47:10.088630 systemd[1]: Starting update-engine.service - Update Engine... Jan 24 00:47:10.094399 extend-filesystems[1694]: Found loop4 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found loop5 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found loop6 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found loop7 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda1 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda2 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda3 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found usr Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda4 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda6 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda7 Jan 24 00:47:10.101092 extend-filesystems[1694]: Found sda9 Jan 24 00:47:10.101092 extend-filesystems[1694]: Checking size of /dev/sda9 Jan 24 00:47:10.098869 chronyd[1711]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 24 00:47:10.104516 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 24 00:47:10.132412 KVP[1697]: KVP LIC Version: 3.1 Jan 24 00:47:10.133161 kernel: hv_utils: KVP IC version 4.0 Jan 24 00:47:10.135440 jq[1712]: true Jan 24 00:47:10.147574 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 24 00:47:10.148156 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 24 00:47:10.149553 systemd[1]: motdgen.service: Deactivated successfully. Jan 24 00:47:10.151498 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 24 00:47:10.160515 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 24 00:47:10.160729 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 24 00:47:10.167533 chronyd[1711]: Timezone right/UTC failed leap second check, ignoring Jan 24 00:47:10.167747 chronyd[1711]: Loaded seccomp filter (level 2) Jan 24 00:47:10.172830 systemd[1]: Started chronyd.service - NTP client/server. Jan 24 00:47:10.188762 (ntainerd)[1724]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 24 00:47:10.196173 jq[1723]: true Jan 24 00:47:10.226258 update_engine[1710]: I20260124 00:47:10.225862 1710 main.cc:92] Flatcar Update Engine starting Jan 24 00:47:10.236452 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 24 00:47:10.243854 tar[1722]: linux-amd64/LICENSE Jan 24 00:47:10.246135 tar[1722]: linux-amd64/helm Jan 24 00:47:10.269126 extend-filesystems[1694]: Old size kept for /dev/sda9 Jan 24 00:47:10.269126 extend-filesystems[1694]: Found sr0 Jan 24 00:47:10.273689 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 24 00:47:10.275333 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 24 00:47:10.335497 systemd-logind[1708]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 24 00:47:10.344281 systemd-logind[1708]: New seat seat0. 
Jan 24 00:47:10.350535 systemd[1]: Started systemd-logind.service - User Login Management. Jan 24 00:47:10.370360 dbus-daemon[1692]: [system] SELinux support is enabled Jan 24 00:47:10.370808 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 24 00:47:10.380487 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 24 00:47:10.380529 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 24 00:47:10.383893 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 24 00:47:10.383918 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 24 00:47:10.390520 dbus-daemon[1692]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 24 00:47:10.396647 systemd[1]: Started update-engine.service - Update Engine. Jan 24 00:47:10.406289 update_engine[1710]: I20260124 00:47:10.405390 1710 update_check_scheduler.cc:74] Next update check in 6m19s Jan 24 00:47:10.410247 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 24 00:47:10.443243 bash[1760]: Updated "/home/core/.ssh/authorized_keys" Jan 24 00:47:10.457221 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1765) Jan 24 00:47:10.446199 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 24 00:47:10.455642 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 24 00:47:10.537301 coreos-metadata[1691]: Jan 24 00:47:10.537 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 24 00:47:10.540007 coreos-metadata[1691]: Jan 24 00:47:10.539 INFO Fetch successful Jan 24 00:47:10.540007 coreos-metadata[1691]: Jan 24 00:47:10.539 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 24 00:47:10.562653 coreos-metadata[1691]: Jan 24 00:47:10.561 INFO Fetch successful Jan 24 00:47:10.562653 coreos-metadata[1691]: Jan 24 00:47:10.561 INFO Fetching http://168.63.129.16/machine/2453f0a8-e19c-463e-a9ce-64341b3a7047/d2a2f2a5%2D2675%2D436a%2D84fd%2D328294839ea4.%5Fci%2D4081.3.6%2Dn%2Df1b70866be?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 24 00:47:10.565341 coreos-metadata[1691]: Jan 24 00:47:10.565 INFO Fetch successful Jan 24 00:47:10.565341 coreos-metadata[1691]: Jan 24 00:47:10.565 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 24 00:47:10.581081 coreos-metadata[1691]: Jan 24 00:47:10.578 INFO Fetch successful Jan 24 00:47:10.615172 sshd_keygen[1725]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 24 00:47:10.636659 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 24 00:47:10.643802 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 24 00:47:10.697941 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 24 00:47:10.706443 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 24 00:47:10.710805 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... 
Jan 24 00:47:10.728038 systemd[1]: issuegen.service: Deactivated successfully. Jan 24 00:47:10.728302 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 24 00:47:10.731394 locksmithd[1770]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 24 00:47:10.750950 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 24 00:47:10.772367 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 24 00:47:10.803255 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 24 00:47:10.821497 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 24 00:47:10.832599 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 24 00:47:10.836404 systemd[1]: Reached target getty.target - Login Prompts. Jan 24 00:47:11.086230 tar[1722]: linux-amd64/README.md Jan 24 00:47:11.099776 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 24 00:47:11.822274 containerd[1724]: time="2026-01-24T00:47:11.821235600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 24 00:47:11.853132 containerd[1724]: time="2026-01-24T00:47:11.853075200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.854686 containerd[1724]: time="2026-01-24T00:47:11.854655500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:47:11.854794 containerd[1724]: time="2026-01-24T00:47:11.854782500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 24 00:47:11.854839 containerd[1724]: time="2026-01-24T00:47:11.854830900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 24 00:47:11.855168 containerd[1724]: time="2026-01-24T00:47:11.854984000Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 24 00:47:11.855168 containerd[1724]: time="2026-01-24T00:47:11.855000200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855168 containerd[1724]: time="2026-01-24T00:47:11.855049900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855168 containerd[1724]: time="2026-01-24T00:47:11.855060300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855353 containerd[1724]: time="2026-01-24T00:47:11.855270300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855353 containerd[1724]: time="2026-01-24T00:47:11.855293400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855353 containerd[1724]: time="2026-01-24T00:47:11.855312300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855353 containerd[1724]: time="2026-01-24T00:47:11.855326100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855491 containerd[1724]: time="2026-01-24T00:47:11.855443900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855715 containerd[1724]: time="2026-01-24T00:47:11.855685100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855848 containerd[1724]: time="2026-01-24T00:47:11.855823800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 24 00:47:11.855848 containerd[1724]: time="2026-01-24T00:47:11.855843700Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 24 00:47:11.855957 containerd[1724]: time="2026-01-24T00:47:11.855937800Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 24 00:47:11.856014 containerd[1724]: time="2026-01-24T00:47:11.855995500Z" level=info msg="metadata content store policy set" policy=shared Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.890749300Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.890817900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.890840400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.890873700Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.890897600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891064800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891423700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891568000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891589000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891617400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891643200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891664300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891683200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.892839 containerd[1724]: time="2026-01-24T00:47:11.891703600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891722700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891740400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891758800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891775000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891800500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891818400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891835300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891854500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891871700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891899300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891926000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891947300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891964500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893401 containerd[1724]: time="2026-01-24T00:47:11.891982700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.891999600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892017400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892033700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892055700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892088200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892102400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892114400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892282200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892313700Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892420000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892439700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892453400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892469100Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 24 00:47:11.893867 containerd[1724]: time="2026-01-24T00:47:11.892495600Z" level=info msg="NRI interface is disabled by configuration." Jan 24 00:47:11.894345 containerd[1724]: time="2026-01-24T00:47:11.892512200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.892930300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.893018600Z" level=info msg="Connect containerd service" Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.893078400Z" level=info msg="using legacy CRI server" Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.893089500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.893259600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 24 00:47:11.894386 containerd[1724]: time="2026-01-24T00:47:11.894205700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:47:11.894732 
containerd[1724]: time="2026-01-24T00:47:11.894374100Z" level=info msg="Start subscribing containerd event" Jan 24 00:47:11.894732 containerd[1724]: time="2026-01-24T00:47:11.894436100Z" level=info msg="Start recovering state" Jan 24 00:47:11.894732 containerd[1724]: time="2026-01-24T00:47:11.894513100Z" level=info msg="Start event monitor" Jan 24 00:47:11.894732 containerd[1724]: time="2026-01-24T00:47:11.894531500Z" level=info msg="Start snapshots syncer" Jan 24 00:47:11.894732 containerd[1724]: time="2026-01-24T00:47:11.894542500Z" level=info msg="Start cni network conf syncer for default" Jan 24 00:47:11.894732 containerd[1724]: time="2026-01-24T00:47:11.894552000Z" level=info msg="Start streaming server" Jan 24 00:47:11.896475 containerd[1724]: time="2026-01-24T00:47:11.895057900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 24 00:47:11.896475 containerd[1724]: time="2026-01-24T00:47:11.895209100Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 24 00:47:11.895933 systemd[1]: Started containerd.service - containerd container runtime. Jan 24 00:47:11.897219 containerd[1724]: time="2026-01-24T00:47:11.897200900Z" level=info msg="containerd successfully booted in 0.077114s" Jan 24 00:47:12.049543 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:47:12.053439 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 24 00:47:12.058690 systemd[1]: Startup finished in 736ms (firmware) + 17.918s (loader) + 1.012s (kernel) + 11.524s (initrd) + 14.958s (userspace) = 46.152s. Jan 24 00:47:12.068381 (kubelet)[1849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:47:12.615387 login[1832]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 24 00:47:12.616954 login[1833]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 24 00:47:12.631483 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 24 00:47:12.639848 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 24 00:47:12.642202 systemd-logind[1708]: New session 1 of user core. Jan 24 00:47:12.646491 systemd-logind[1708]: New session 2 of user core. Jan 24 00:47:12.671439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 24 00:47:12.680137 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 24 00:47:12.702838 (systemd)[1861]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 24 00:47:12.844171 kubelet[1849]: E0124 00:47:12.842112 1849 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:47:12.845758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:47:12.845947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:47:12.875233 systemd[1861]: Queued start job for default target default.target. Jan 24 00:47:12.884182 systemd[1861]: Created slice app.slice - User Application Slice. Jan 24 00:47:12.884237 systemd[1861]: Reached target paths.target - Paths. 
Jan 24 00:47:12.884254 systemd[1861]: Reached target timers.target - Timers. Jan 24 00:47:12.885477 systemd[1861]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 24 00:47:12.901557 systemd[1861]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 24 00:47:12.901681 systemd[1861]: Reached target sockets.target - Sockets. Jan 24 00:47:12.901700 systemd[1861]: Reached target basic.target - Basic System. Jan 24 00:47:12.901742 systemd[1861]: Reached target default.target - Main User Target. Jan 24 00:47:12.901776 systemd[1861]: Startup finished in 189ms. Jan 24 00:47:12.902077 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 24 00:47:12.912302 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 24 00:47:12.913691 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 24 00:47:13.353288 waagent[1830]: 2026-01-24T00:47:13.353109Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.353537Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.354509Z INFO Daemon Daemon Python: 3.11.9 Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.355613Z INFO Daemon Daemon Run daemon Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.356308Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.357036Z INFO Daemon Daemon Using waagent for provisioning Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.357636Z INFO Daemon Daemon Activate resource disk Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.358371Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.362486Z INFO Daemon Daemon Found device: None Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.363477Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.364384Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.366356Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 24 00:47:13.390001 waagent[1830]: 2026-01-24T00:47:13.367079Z INFO Daemon Daemon Running default provisioning handler Jan 24 00:47:13.392890 waagent[1830]: 2026-01-24T00:47:13.392819Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 24 00:47:13.398834 waagent[1830]: 2026-01-24T00:47:13.398782Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 24 00:47:13.406871 waagent[1830]: 2026-01-24T00:47:13.398966Z INFO Daemon Daemon cloud-init is enabled: False Jan 24 00:47:13.406871 waagent[1830]: 2026-01-24T00:47:13.399891Z INFO Daemon Daemon Copying ovf-env.xml Jan 24 00:47:13.475173 waagent[1830]: 2026-01-24T00:47:13.471434Z INFO Daemon Daemon Successfully mounted dvd Jan 24 00:47:13.543736 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 24 00:47:13.545566 waagent[1830]: 2026-01-24T00:47:13.545489Z INFO Daemon Daemon Detect protocol endpoint Jan 24 00:47:13.548980 waagent[1830]: 2026-01-24T00:47:13.548909Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 24 00:47:13.552537 waagent[1830]: 2026-01-24T00:47:13.552479Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 24 00:47:13.556526 waagent[1830]: 2026-01-24T00:47:13.556469Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 24 00:47:13.560238 waagent[1830]: 2026-01-24T00:47:13.560189Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 24 00:47:13.563501 waagent[1830]: 2026-01-24T00:47:13.563444Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 24 00:47:13.592364 waagent[1830]: 2026-01-24T00:47:13.592310Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 24 00:47:13.600813 waagent[1830]: 2026-01-24T00:47:13.593334Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 24 00:47:13.600813 waagent[1830]: 2026-01-24T00:47:13.594657Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 24 00:47:13.709716 waagent[1830]: 2026-01-24T00:47:13.709617Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 24 00:47:13.713166 waagent[1830]: 2026-01-24T00:47:13.713090Z INFO Daemon Daemon Forcing an update of the goal state. Jan 24 00:47:13.718790 waagent[1830]: 2026-01-24T00:47:13.718738Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 24 00:47:13.739024 waagent[1830]: 2026-01-24T00:47:13.738958Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179 Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.740091Z INFO Daemon Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.740917Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 1fba7f23-8869-4f1f-8f6c-4a54580b5bee eTag: 5715247539678195413 source: Fabric] Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.742033Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.742782Z INFO Daemon Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.743451Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 24 00:47:13.750931 waagent[1830]: 2026-01-24T00:47:13.747981Z INFO Daemon Daemon Downloading artifacts profile blob Jan 24 00:47:13.812718 waagent[1830]: 2026-01-24T00:47:13.812644Z INFO Daemon Downloaded certificate {'thumbprint': '34FCF90823E0FBBD8A11F8367A9AA4BF493899E0', 'hasPrivateKey': True} Jan 24 00:47:13.817944 waagent[1830]: 2026-01-24T00:47:13.817883Z INFO Daemon Fetch goal state completed Jan 24 00:47:13.825649 waagent[1830]: 2026-01-24T00:47:13.825605Z INFO Daemon Daemon Starting provisioning Jan 24 00:47:13.832062 waagent[1830]: 2026-01-24T00:47:13.825803Z INFO Daemon Daemon Handle ovf-env.xml. Jan 24 00:47:13.832062 waagent[1830]: 2026-01-24T00:47:13.826621Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-f1b70866be] Jan 24 00:47:13.860006 waagent[1830]: 2026-01-24T00:47:13.859900Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-f1b70866be] Jan 24 00:47:13.864443 waagent[1830]: 2026-01-24T00:47:13.864123Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 24 00:47:13.875961 waagent[1830]: 2026-01-24T00:47:13.866342Z INFO Daemon Daemon Primary interface is [eth0] Jan 24 00:47:13.899818 systemd-networkd[1368]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 24 00:47:13.899828 systemd-networkd[1368]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:47:13.899878 systemd-networkd[1368]: eth0: DHCP lease lost Jan 24 00:47:13.901454 waagent[1830]: 2026-01-24T00:47:13.901375Z INFO Daemon Daemon Create user account if not exists Jan 24 00:47:13.919058 waagent[1830]: 2026-01-24T00:47:13.904997Z INFO Daemon Daemon User core already exists, skip useradd Jan 24 00:47:13.919058 waagent[1830]: 2026-01-24T00:47:13.905134Z INFO Daemon Daemon Configure sudoer Jan 24 00:47:13.919058 waagent[1830]: 2026-01-24T00:47:13.905471Z INFO Daemon Daemon Configure sshd Jan 24 00:47:13.919058 waagent[1830]: 2026-01-24T00:47:13.906960Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 24 00:47:13.919058 waagent[1830]: 2026-01-24T00:47:13.907955Z INFO Daemon Daemon Deploy ssh public key. Jan 24 00:47:13.919220 systemd-networkd[1368]: eth0: DHCPv6 lease lost Jan 24 00:47:13.960209 systemd-networkd[1368]: eth0: DHCPv4 address 10.200.4.29/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 24 00:47:22.860356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:47:22.865403 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:47:22.974048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:47:22.985052 (kubelet)[1919]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:47:23.700111 kubelet[1919]: E0124 00:47:23.700059 1919 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:47:23.703968 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:47:23.704184 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:47:33.859964 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:47:33.866384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:47:33.958693 chronyd[1711]: Selected source PHC0 Jan 24 00:47:34.226810 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:47:34.231230 (kubelet)[1934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:47:34.648608 kubelet[1934]: E0124 00:47:34.648551 1934 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:47:34.651128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:47:34.651361 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:47:43.998627 waagent[1830]: 2026-01-24T00:47:43.998528Z INFO Daemon Daemon Provisioning complete Jan 24 00:47:44.009087 waagent[1830]: 2026-01-24T00:47:44.009036Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 24 00:47:44.016027 waagent[1830]: 2026-01-24T00:47:44.009451Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 24 00:47:44.016027 waagent[1830]: 2026-01-24T00:47:44.010394Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 24 00:47:44.133222 waagent[1940]: 2026-01-24T00:47:44.133112Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 24 00:47:44.133644 waagent[1940]: 2026-01-24T00:47:44.133297Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 24 00:47:44.133644 waagent[1940]: 2026-01-24T00:47:44.133385Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 24 00:47:44.186568 waagent[1940]: 2026-01-24T00:47:44.186480Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 24 00:47:44.186802 waagent[1940]: 2026-01-24T00:47:44.186748Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:47:44.186898 waagent[1940]: 2026-01-24T00:47:44.186861Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:47:44.194254 waagent[1940]: 2026-01-24T00:47:44.194193Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 24 00:47:44.204019 waagent[1940]: 2026-01-24T00:47:44.203962Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179 Jan 24 00:47:44.204518 waagent[1940]: 2026-01-24T00:47:44.204461Z INFO ExtHandler Jan 24 00:47:44.204593 waagent[1940]: 2026-01-24T00:47:44.204556Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 94551c0b-8307-4b01-9a23-0419ea5f5640 eTag: 5715247539678195413 source: Fabric] Jan 24 00:47:44.204917 waagent[1940]: 2026-01-24T00:47:44.204865Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 24 00:47:44.205510 waagent[1940]: 2026-01-24T00:47:44.205452Z INFO ExtHandler Jan 24 00:47:44.205586 waagent[1940]: 2026-01-24T00:47:44.205541Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 24 00:47:44.208371 waagent[1940]: 2026-01-24T00:47:44.208331Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 24 00:47:44.278694 waagent[1940]: 2026-01-24T00:47:44.278564Z INFO ExtHandler Downloaded certificate {'thumbprint': '34FCF90823E0FBBD8A11F8367A9AA4BF493899E0', 'hasPrivateKey': True} Jan 24 00:47:44.279229 waagent[1940]: 2026-01-24T00:47:44.279098Z INFO ExtHandler Fetch goal state completed Jan 24 00:47:44.291944 waagent[1940]: 2026-01-24T00:47:44.291882Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1940 Jan 24 00:47:44.292096 waagent[1940]: 2026-01-24T00:47:44.292046Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 24 00:47:44.293666 waagent[1940]: 2026-01-24T00:47:44.293609Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 24 00:47:44.294029 waagent[1940]: 2026-01-24T00:47:44.293979Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 24 00:47:44.351471 waagent[1940]: 2026-01-24T00:47:44.351418Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 24 00:47:44.351719 waagent[1940]: 2026-01-24T00:47:44.351661Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 24 00:47:44.358761 waagent[1940]: 2026-01-24T00:47:44.358665Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 24 00:47:44.365429 systemd[1]: Reloading requested from client PID 1953 ('systemctl') (unit waagent.service)... Jan 24 00:47:44.365445 systemd[1]: Reloading... Jan 24 00:47:44.444209 zram_generator::config[1983]: No configuration found. Jan 24 00:47:44.582206 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:47:44.662782 systemd[1]: Reloading finished in 296 ms. Jan 24 00:47:44.690174 waagent[1940]: 2026-01-24T00:47:44.689171Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 24 00:47:44.694348 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:47:44.701359 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:47:44.705876 systemd[1]: Reloading requested from client PID 2044 ('systemctl') (unit waagent.service)... Jan 24 00:47:44.705894 systemd[1]: Reloading... Jan 24 00:47:44.819179 zram_generator::config[2084]: No configuration found. Jan 24 00:47:44.960988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:47:45.048229 systemd[1]: Reloading finished in 341 ms. 
Jan 24 00:47:45.077896 waagent[1940]: 2026-01-24T00:47:45.076797Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 24 00:47:45.077896 waagent[1940]: 2026-01-24T00:47:45.076995Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 24 00:47:45.443298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:47:45.448129 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:47:45.486086 kubelet[2148]: E0124 00:47:45.486005 2148 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:47:45.488601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:47:45.488796 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:47:45.716851 waagent[1940]: 2026-01-24T00:47:45.716693Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 24 00:47:45.717615 waagent[1940]: 2026-01-24T00:47:45.717540Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 24 00:47:45.718543 waagent[1940]: 2026-01-24T00:47:45.718471Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 24 00:47:45.719484 waagent[1940]: 2026-01-24T00:47:45.719401Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 24 00:47:45.719901 waagent[1940]: 2026-01-24T00:47:45.719831Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 24 00:47:45.720041 waagent[1940]: 2026-01-24T00:47:45.719961Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 24 00:47:45.720430 waagent[1940]: 2026-01-24T00:47:45.720339Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:47:45.720888 waagent[1940]: 2026-01-24T00:47:45.720795Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:47:45.721029 waagent[1940]: 2026-01-24T00:47:45.720981Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:47:45.721298 waagent[1940]: 2026-01-24T00:47:45.721237Z INFO EnvHandler ExtHandler Configure routes Jan 24 00:47:45.721825 waagent[1940]: 2026-01-24T00:47:45.721634Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:47:45.721825 waagent[1940]: 2026-01-24T00:47:45.721756Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 24 00:47:45.721957 waagent[1940]: 2026-01-24T00:47:45.721830Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 24 00:47:45.722700 waagent[1940]: 2026-01-24T00:47:45.722647Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 24 00:47:45.723089 waagent[1940]: 2026-01-24T00:47:45.723026Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
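kubelet.service fails above, and keeps failing on the scheduled restarts below, for a single reason: /var/lib/kubelet/config.yaml does not exist yet. kubeadm writes that file during init/join, so the crash loop is expected on a node that has not yet joined a cluster. A sketch of the check kubelet is effectively making (the path comes from the logged error; the rest is illustrative):

```python
# Sketch of the startup check behind the repeated kubelet failures in this
# log. The path is taken from the error message; the exit mirrors the
# status=1/FAILURE systemd records before scheduling the next restart.
from pathlib import Path

KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.is_file():
    # systemd sees exit code 1 and bumps the "restart counter" visible in
    # the journal entries until kubeadm finally writes the config file.
    raise SystemExit(
        f"failed to read kubelet config file {KUBELET_CONFIG}: "
        "no such file or directory"
    )
```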
Jan 24 00:47:45.723225 waagent[1940]: 2026-01-24T00:47:45.723126Z INFO EnvHandler ExtHandler Gateway:None Jan 24 00:47:45.723771 waagent[1940]: 2026-01-24T00:47:45.723645Z INFO EnvHandler ExtHandler Routes:None Jan 24 00:47:45.725297 waagent[1940]: 2026-01-24T00:47:45.725197Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 24 00:47:45.725297 waagent[1940]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 24 00:47:45.725297 waagent[1940]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 24 00:47:45.725297 waagent[1940]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 24 00:47:45.725297 waagent[1940]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:47:45.725297 waagent[1940]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:47:45.725297 waagent[1940]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:47:45.731174 waagent[1940]: 2026-01-24T00:47:45.729688Z INFO ExtHandler ExtHandler Jan 24 00:47:45.732254 waagent[1940]: 2026-01-24T00:47:45.732213Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: c4d2a35e-0df2-4025-974b-2f4e3bf2fa91 correlation 644954bc-62c1-4696-ba25-ea6568dd9b03 created: 2026-01-24T00:46:16.074921Z] Jan 24 00:47:45.732720 waagent[1940]: 2026-01-24T00:47:45.732665Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 24 00:47:45.734176 waagent[1940]: 2026-01-24T00:47:45.733446Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 24 00:47:45.771736 waagent[1940]: 2026-01-24T00:47:45.771673Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D2B9A1AD-DA21-4FF0-BC59-14584C726194;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 24 00:47:45.802938 waagent[1940]: 2026-01-24T00:47:45.802868Z INFO MonitorHandler ExtHandler Network interfaces: Jan 24 00:47:45.802938 waagent[1940]: Executing ['ip', '-a', '-o', 'link']: Jan 24 00:47:45.802938 waagent[1940]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 24 00:47:45.802938 waagent[1940]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:9c:f6:de brd ff:ff:ff:ff:ff:ff Jan 24 00:47:45.802938 waagent[1940]: 3: enP13338s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:9c:f6:de brd ff:ff:ff:ff:ff:ff\ altname enP13338p0s2 Jan 24 00:47:45.802938 waagent[1940]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 24 00:47:45.802938 waagent[1940]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 24 00:47:45.802938 waagent[1940]: 2: eth0 inet 10.200.4.29/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 24 00:47:45.802938 waagent[1940]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 24 00:47:45.802938 waagent[1940]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 24 00:47:45.802938 waagent[1940]: 2: eth0 inet6 fe80::7eed:8dff:fe9c:f6de/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 24 00:47:45.844995 waagent[1940]: 2026-01-24T00:47:45.844931Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 24 00:47:45.844995 waagent[1940]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.844995 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.844995 waagent[1940]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.844995 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.844995 waagent[1940]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.844995 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.844995 waagent[1940]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:47:45.844995 waagent[1940]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:47:45.844995 waagent[1940]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:47:45.848304 waagent[1940]: 2026-01-24T00:47:45.848245Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 24 00:47:45.848304 waagent[1940]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.848304 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.848304 waagent[1940]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.848304 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.848304 waagent[1940]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:47:45.848304 waagent[1940]: pkts bytes target prot opt in out source destination Jan 24 00:47:45.848304 waagent[1940]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:47:45.848304 waagent[1940]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:47:45.848304 waagent[1940]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:47:45.848695 waagent[1940]: 2026-01-24T00:47:45.848553Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 24 00:47:51.780205 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 24 00:47:55.604808 update_engine[1710]: I20260124 00:47:55.604702 1710 update_attempter.cc:509] Updating boot flags... Jan 24 00:47:55.610137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:47:55.617311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:47:55.678180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2199) Jan 24 00:47:56.314202 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2202) Jan 24 00:47:56.408469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
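The MonitorHandler routing dump two entries back is raw /proc/net/route, where every address is a little-endian hex word: 0104C80A is the default gateway 10.200.4.1, 10813FA8 is the WireServer 168.63.129.16, and FEA9FEA9 is the IMDS address 169.254.169.254. A small decoder, assuming only the standard /proc format:

```python
# Decoder for the hex routing table MonitorHandler printed above.
# /proc/net/route stores IPv4 addresses as little-endian hex words.
import socket
import struct

def hex_to_ip(word: str) -> str:
    return socket.inet_ntoa(struct.pack("<I", int(word, 16)))

with open("/proc/net/route") as f:
    next(f)  # header: Iface Destination Gateway Flags RefCnt Use Metric Mask ...
    for line in f:
        fields = line.split()
        iface, dest, gateway, mask = fields[0], fields[1], fields[2], fields[7]
        print(f"{iface}: {hex_to_ip(dest)}/{hex_to_ip(mask)} via {hex_to_ip(gateway)}")
```

Read together with the iptables dump above, the picture is consistent: traffic to the WireServer is accepted only for DNS (tcp dpt:53) and for root-owned connections (owner UID match 0), while other new connections to 168.63.129.16 are dropped.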
Jan 24 00:47:56.412918 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:47:56.482166 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2202) Jan 24 00:47:56.489490 kubelet[2252]: E0124 00:47:56.489423 2252 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:47:56.492752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:47:56.492957 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:02.516516 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:48:02.521431 systemd[1]: Started sshd@0-10.200.4.29:22-10.200.16.10:51126.service - OpenSSH per-connection server daemon (10.200.16.10:51126). Jan 24 00:48:03.204543 sshd[2293]: Accepted publickey for core from 10.200.16.10 port 51126 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:03.206110 sshd[2293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:03.210202 systemd-logind[1708]: New session 3 of user core. Jan 24 00:48:03.224326 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:48:03.732056 systemd[1]: Started sshd@1-10.200.4.29:22-10.200.16.10:51132.service - OpenSSH per-connection server daemon (10.200.16.10:51132). Jan 24 00:48:04.329744 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 51132 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:04.331274 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:04.336204 systemd-logind[1708]: New session 4 of user core. Jan 24 00:48:04.345315 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:48:04.757590 sshd[2298]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:04.761354 systemd[1]: sshd@1-10.200.4.29:22-10.200.16.10:51132.service: Deactivated successfully. Jan 24 00:48:04.763332 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:48:04.764022 systemd-logind[1708]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:48:04.764954 systemd-logind[1708]: Removed session 4. Jan 24 00:48:04.864092 systemd[1]: Started sshd@2-10.200.4.29:22-10.200.16.10:51146.service - OpenSSH per-connection server daemon (10.200.16.10:51146). Jan 24 00:48:05.466725 sshd[2305]: Accepted publickey for core from 10.200.16.10 port 51146 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:05.468552 sshd[2305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:05.474602 systemd-logind[1708]: New session 5 of user core. Jan 24 00:48:05.480331 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:48:05.890809 sshd[2305]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:05.894570 systemd[1]: sshd@2-10.200.4.29:22-10.200.16.10:51146.service: Deactivated successfully. Jan 24 00:48:05.896664 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:48:05.897338 systemd-logind[1708]: Session 5 logged out. Waiting for processes to exit. 
Jan 24 00:48:05.898298 systemd-logind[1708]: Removed session 5. Jan 24 00:48:05.997044 systemd[1]: Started sshd@3-10.200.4.29:22-10.200.16.10:51156.service - OpenSSH per-connection server daemon (10.200.16.10:51156). Jan 24 00:48:06.599129 sshd[2312]: Accepted publickey for core from 10.200.16.10 port 51156 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:06.600610 sshd[2312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:06.601499 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 24 00:48:06.608557 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:06.615593 systemd-logind[1708]: New session 6 of user core. Jan 24 00:48:06.616269 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:48:06.973681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:06.978340 (kubelet)[2324]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:07.016874 kubelet[2324]: E0124 00:48:07.016780 2324 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:07.019421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:07.019644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:07.029430 sshd[2312]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:07.032190 systemd[1]: sshd@3-10.200.4.29:22-10.200.16.10:51156.service: Deactivated successfully. Jan 24 00:48:07.036001 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:48:07.037802 systemd-logind[1708]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:48:07.038791 systemd-logind[1708]: Removed session 6. Jan 24 00:48:07.135365 systemd[1]: Started sshd@4-10.200.4.29:22-10.200.16.10:51158.service - OpenSSH per-connection server daemon (10.200.16.10:51158). Jan 24 00:48:07.736904 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 51158 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:07.740519 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:07.745521 systemd-logind[1708]: New session 7 of user core. Jan 24 00:48:07.754311 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:48:08.198386 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:48:08.198750 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:08.230584 sudo[2337]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:08.326419 sshd[2334]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:08.331082 systemd[1]: sshd@4-10.200.4.29:22-10.200.16.10:51158.service: Deactivated successfully. Jan 24 00:48:08.332918 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:48:08.333716 systemd-logind[1708]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:48:08.334702 systemd-logind[1708]: Removed session 7. 
Jan 24 00:48:08.434640 systemd[1]: Started sshd@5-10.200.4.29:22-10.200.16.10:51168.service - OpenSSH per-connection server daemon (10.200.16.10:51168). Jan 24 00:48:09.048859 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 51168 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:09.050693 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:09.057351 systemd-logind[1708]: New session 8 of user core. Jan 24 00:48:09.065316 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 24 00:48:09.384016 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:48:09.384392 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:09.387493 sudo[2346]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:09.392439 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:48:09.392779 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:09.402464 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:48:09.406964 auditctl[2349]: No rules Jan 24 00:48:09.407354 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:48:09.407549 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:48:09.410687 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:48:09.445267 augenrules[2367]: No rules Jan 24 00:48:09.446670 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:48:09.447852 sudo[2345]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:09.545450 sshd[2342]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:09.549605 systemd[1]: sshd@5-10.200.4.29:22-10.200.16.10:51168.service: Deactivated successfully. Jan 24 00:48:09.551386 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:48:09.552084 systemd-logind[1708]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:48:09.553031 systemd-logind[1708]: Removed session 8. Jan 24 00:48:09.651932 systemd[1]: Started sshd@6-10.200.4.29:22-10.200.16.10:47622.service - OpenSSH per-connection server daemon (10.200.16.10:47622). Jan 24 00:48:10.258471 sshd[2375]: Accepted publickey for core from 10.200.16.10 port 47622 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:10.259921 sshd[2375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:10.264829 systemd-logind[1708]: New session 9 of user core. Jan 24 00:48:10.270314 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:48:10.592735 sudo[2378]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:48:10.593099 sudo[2378]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:11.771455 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 24 00:48:11.772922 (dockerd)[2394]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:48:13.125410 dockerd[2394]: time="2026-01-24T00:48:13.125346328Z" level=info msg="Starting up" Jan 24 00:48:13.565808 dockerd[2394]: time="2026-01-24T00:48:13.565540596Z" level=info msg="Loading containers: start." Jan 24 00:48:13.720300 kernel: Initializing XFRM netlink socket Jan 24 00:48:13.841439 systemd-networkd[1368]: docker0: Link UP Jan 24 00:48:13.866399 dockerd[2394]: time="2026-01-24T00:48:13.866359428Z" level=info msg="Loading containers: done." Jan 24 00:48:13.961400 dockerd[2394]: time="2026-01-24T00:48:13.961351044Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:48:13.961573 dockerd[2394]: time="2026-01-24T00:48:13.961482945Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:48:13.961621 dockerd[2394]: time="2026-01-24T00:48:13.961611747Z" level=info msg="Daemon has completed initialization" Jan 24 00:48:14.021811 dockerd[2394]: time="2026-01-24T00:48:14.021754953Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:48:14.021916 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:48:15.018071 containerd[1724]: time="2026-01-24T00:48:15.018019714Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 24 00:48:15.687790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount140765014.mount: Deactivated successfully. Jan 24 00:48:17.110661 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 24 00:48:17.120258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:17.284294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:17.294520 (kubelet)[2597]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:17.913662 kubelet[2597]: E0124 00:48:17.913608 2597 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:17.916386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:17.916600 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
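dockerd is now serving its API on /run/docker.sock (note the earlier systemd warning that rewrote the legacy /var/run path in docker.socket). A quick dependency-free way to confirm, assuming only the standard library and Docker's stock /version Engine API route; run as root or a user with socket access:

```python
# Probe the socket dockerd announced above ("API listen on /run/docker.sock").
# A raw HTTP/1.0 request keeps this free of the docker SDK; the JSON reply
# should report version 26.1.0, matching the daemon log.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
    data = b""
    while chunk := s.recv(4096):  # HTTP/1.0: server closes when done
        data += chunk
print(data.decode())
```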
Jan 24 00:48:17.953551 containerd[1724]: time="2026-01-24T00:48:17.953495272Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:17.960464 containerd[1724]: time="2026-01-24T00:48:17.960406052Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=30114720" Jan 24 00:48:17.966261 containerd[1724]: time="2026-01-24T00:48:17.966210620Z" level=info msg="ImageCreate event name:\"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:17.971610 containerd[1724]: time="2026-01-24T00:48:17.971562083Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:17.973760 containerd[1724]: time="2026-01-24T00:48:17.972612295Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"30111311\" in 2.954541981s" Jan 24 00:48:17.973760 containerd[1724]: time="2026-01-24T00:48:17.972668696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:021d1ceeffb11df7a9fb9adfa0ad0a30dcd13cb3d630022066f184cdcb93731b\"" Jan 24 00:48:17.973760 containerd[1724]: time="2026-01-24T00:48:17.973712108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 24 00:48:19.596225 containerd[1724]: time="2026-01-24T00:48:19.596171198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:19.600853 containerd[1724]: time="2026-01-24T00:48:19.600807852Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=26016789" Jan 24 00:48:19.604595 containerd[1724]: time="2026-01-24T00:48:19.604561696Z" level=info msg="ImageCreate event name:\"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:19.609716 containerd[1724]: time="2026-01-24T00:48:19.609676156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:19.611721 containerd[1724]: time="2026-01-24T00:48:19.611107773Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"27673815\" in 1.637371665s" Jan 24 00:48:19.611721 containerd[1724]: time="2026-01-24T00:48:19.611141473Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:29c7cab9d8e681d047281fd3711baf13c28f66923480fb11c8f22ddb7ca742d1\"" Jan 24 00:48:19.612016 
containerd[1724]: time="2026-01-24T00:48:19.611992683Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 24 00:48:20.965295 containerd[1724]: time="2026-01-24T00:48:20.965233922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:20.970239 containerd[1724]: time="2026-01-24T00:48:20.970170380Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=20158110" Jan 24 00:48:20.973524 containerd[1724]: time="2026-01-24T00:48:20.973462518Z" level=info msg="ImageCreate event name:\"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:20.978820 containerd[1724]: time="2026-01-24T00:48:20.978769180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:20.979891 containerd[1724]: time="2026-01-24T00:48:20.979757492Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"21815154\" in 1.367662608s" Jan 24 00:48:20.979891 containerd[1724]: time="2026-01-24T00:48:20.979795092Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:f457f6fcd712acb5b9beef873f6f4a4869182f9eb52ea6e24824fd4ac4eed393\"" Jan 24 00:48:20.980826 containerd[1724]: time="2026-01-24T00:48:20.980799904Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 24 00:48:22.011383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990065799.mount: Deactivated successfully. Jan 24 00:48:28.110026 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 24 00:48:28.117392 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:28.286241 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:28.293456 (kubelet)[2623]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:28.901393 kubelet[2623]: E0124 00:48:28.901321 2623 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:28.903869 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:28.904086 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 24 00:48:35.830096 containerd[1724]: time="2026-01-24T00:48:35.829973256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.833962 containerd[1724]: time="2026-01-24T00:48:35.833919711Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=31930104" Jan 24 00:48:35.837541 containerd[1724]: time="2026-01-24T00:48:35.837489261Z" level=info msg="ImageCreate event name:\"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.842648 containerd[1724]: time="2026-01-24T00:48:35.842598633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.843342 containerd[1724]: time="2026-01-24T00:48:35.843182442Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"31929115\" in 14.862349137s" Jan 24 00:48:35.843342 containerd[1724]: time="2026-01-24T00:48:35.843221742Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:0929027b17fc30cb9de279f3bdba4e130b991a1dab7978a7db2e5feb2091853c\"" Jan 24 00:48:35.844108 containerd[1724]: time="2026-01-24T00:48:35.843817151Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 24 00:48:37.980394 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3323512480.mount: Deactivated successfully. Jan 24 00:48:39.110479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 24 00:48:39.117516 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:39.273397 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:39.279099 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:39.879874 kubelet[2694]: E0124 00:48:39.879815 2694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:39.882835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:39.883102 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
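The pull timings above vary widely: kube-apiserver moved roughly 30 MB in under three seconds, while kube-proxy took almost fifteen seconds for a similar size, most likely because it contended with the other pulls running concurrently. Back-of-the-envelope, using the exact size and duration pairs from the containerd messages:

```python
# Effective throughput for two of the pulls logged above, computed from the
# size ("in bytes") and wall-clock duration containerd reported.
pulls = {
    "kube-apiserver:v1.33.7": (30_111_311, 2.954541981),
    "kube-proxy:v1.33.7":     (31_929_115, 14.862349137),
}
for image, (size_bytes, seconds) in pulls.items():
    print(f"{image}: {size_bytes / seconds / 1e6:.1f} MB/s")
# -> ~10.2 MB/s vs ~2.1 MB/s
```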
Jan 24 00:48:39.922154 containerd[1724]: time="2026-01-24T00:48:39.922096011Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.926023 containerd[1724]: time="2026-01-24T00:48:39.925950959Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246" Jan 24 00:48:39.935331 containerd[1724]: time="2026-01-24T00:48:39.935281674Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.943420 containerd[1724]: time="2026-01-24T00:48:39.943362074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.944719 containerd[1724]: time="2026-01-24T00:48:39.944481988Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 4.100629437s" Jan 24 00:48:39.944719 containerd[1724]: time="2026-01-24T00:48:39.944521489Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Jan 24 00:48:39.945380 containerd[1724]: time="2026-01-24T00:48:39.945203397Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:48:40.447230 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561787751.mount: Deactivated successfully. 
Jan 24 00:48:40.470605 containerd[1724]: time="2026-01-24T00:48:40.470554794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:40.476571 containerd[1724]: time="2026-01-24T00:48:40.476508467Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 24 00:48:40.483834 containerd[1724]: time="2026-01-24T00:48:40.483785857Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:40.490416 containerd[1724]: time="2026-01-24T00:48:40.490366539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:40.491062 containerd[1724]: time="2026-01-24T00:48:40.491028047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 545.79095ms" Jan 24 00:48:40.491162 containerd[1724]: time="2026-01-24T00:48:40.491067947Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:48:40.491845 containerd[1724]: time="2026-01-24T00:48:40.491813757Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 24 00:48:40.996384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3768035361.mount: Deactivated successfully. Jan 24 00:48:43.334551 containerd[1724]: time="2026-01-24T00:48:43.334494610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:43.337541 containerd[1724]: time="2026-01-24T00:48:43.337483547Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58926235" Jan 24 00:48:43.340752 containerd[1724]: time="2026-01-24T00:48:43.340696187Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:43.345655 containerd[1724]: time="2026-01-24T00:48:43.345209243Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:43.346709 containerd[1724]: time="2026-01-24T00:48:43.346675961Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.854832204s" Jan 24 00:48:43.346709 containerd[1724]: time="2026-01-24T00:48:43.346706561Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Jan 24 00:48:45.999711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
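With the etcd layer fetched, every image the control plane needs is cached locally before kubeadm ever runs. Summing the size fields from the "Pulled image" messages in this log:

```python
# Total control-plane image footprint, summed from the pull messages above.
sizes = {
    "kube-apiserver": 30_111_311,
    "kube-controller-manager": 27_673_815,
    "kube-scheduler": 21_815_154,
    "kube-proxy": 31_929_115,
    "coredns": 20_939_036,
    "pause:3.10": 320_368,
    "etcd": 58_938_593,
}
print(f"{sum(sizes.values()) / 1e6:.1f} MB")  # ~191.7 MB in total
```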
Jan 24 00:48:46.007616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:46.045356 systemd[1]: Reloading requested from client PID 2788 ('systemctl') (unit session-9.scope)... Jan 24 00:48:46.045376 systemd[1]: Reloading... Jan 24 00:48:46.167180 zram_generator::config[2837]: No configuration found. Jan 24 00:48:46.304376 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:48:46.388078 systemd[1]: Reloading finished in 342 ms. Jan 24 00:48:46.436529 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:46.441880 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:46.443639 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:48:46.443876 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:46.449618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:46.818644 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:46.831494 (kubelet)[2900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:48:46.869710 kubelet[2900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:46.871198 kubelet[2900]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:48:46.871198 kubelet[2900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 24 00:48:46.871198 kubelet[2900]: I0124 00:48:46.870305 2900 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:48:47.026910 kubelet[2900]: I0124 00:48:47.026866 2900 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:48:47.026910 kubelet[2900]: I0124 00:48:47.026897 2900 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:48:47.027299 kubelet[2900]: I0124 00:48:47.027275 2900 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:48:47.588187 kubelet[2900]: E0124 00:48:47.587954 2900 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:48:47.588466 kubelet[2900]: I0124 00:48:47.588440 2900 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:48:47.623084 kubelet[2900]: E0124 00:48:47.623029 2900 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:48:47.623084 kubelet[2900]: I0124 00:48:47.623092 2900 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:48:47.627612 kubelet[2900]: I0124 00:48:47.627584 2900 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:48:47.627853 kubelet[2900]: I0124 00:48:47.627823 2900 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:48:47.628026 kubelet[2900]: I0124 00:48:47.627850 2900 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f1b70866be","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:48:47.628205 kubelet[2900]: I0124 00:48:47.628034 2900 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:48:47.628205 kubelet[2900]: I0124 00:48:47.628047 2900 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:48:47.628299 kubelet[2900]: I0124 00:48:47.628208 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:47.632448 kubelet[2900]: I0124 00:48:47.631897 2900 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:48:47.632448 kubelet[2900]: I0124 00:48:47.632021 2900 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:48:47.632448 kubelet[2900]: I0124 00:48:47.632059 2900 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:48:47.632448 kubelet[2900]: I0124 00:48:47.632104 2900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:48:47.682746 kubelet[2900]: E0124 00:48:47.681867 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f1b70866be&limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:48:47.682746 kubelet[2900]: E0124 00:48:47.682005 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Jan 24 00:48:47.682746 kubelet[2900]: I0124 00:48:47.682139 2900 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:48:47.682958 kubelet[2900]: I0124 00:48:47.682809 2900 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:48:47.684173 kubelet[2900]: W0124 00:48:47.684028 2900 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:48:47.688446 kubelet[2900]: I0124 00:48:47.688425 2900 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:48:47.688540 kubelet[2900]: I0124 00:48:47.688485 2900 server.go:1289] "Started kubelet" Jan 24 00:48:47.690174 kubelet[2900]: I0124 00:48:47.689275 2900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:48:47.690745 kubelet[2900]: I0124 00:48:47.690724 2900 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:48:47.692623 kubelet[2900]: I0124 00:48:47.690729 2900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:48:47.700137 kubelet[2900]: I0124 00:48:47.700091 2900 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:48:47.700925 kubelet[2900]: I0124 00:48:47.700898 2900 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:48:47.701671 kubelet[2900]: I0124 00:48:47.701646 2900 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:48:47.701894 kubelet[2900]: E0124 00:48:47.701870 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f1b70866be\" not found" Jan 24 00:48:47.704190 kubelet[2900]: I0124 00:48:47.704164 2900 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:48:47.704262 kubelet[2900]: I0124 00:48:47.704228 2900 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:48:47.716525 kubelet[2900]: E0124 00:48:47.716496 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f1b70866be?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="200ms" Jan 24 00:48:47.717312 kubelet[2900]: E0124 00:48:47.717284 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:48:47.719175 kubelet[2900]: E0124 00:48:47.718993 2900 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:48:47.719175 kubelet[2900]: I0124 00:48:47.719130 2900 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:48:47.719175 kubelet[2900]: I0124 00:48:47.719157 2900 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:48:47.719327 kubelet[2900]: I0124 00:48:47.719232 2900 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:48:47.737752 kubelet[2900]: I0124 00:48:47.736326 2900 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:48:47.749279 kubelet[2900]: I0124 00:48:47.749221 2900 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:48:47.751676 kubelet[2900]: I0124 00:48:47.751647 2900 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:48:47.751676 kubelet[2900]: I0124 00:48:47.751668 2900 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:48:47.751796 kubelet[2900]: I0124 00:48:47.751720 2900 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:48:47.751796 kubelet[2900]: I0124 00:48:47.751732 2900 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:48:47.751796 kubelet[2900]: E0124 00:48:47.751775 2900 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:48:47.759345 kubelet[2900]: E0124 00:48:47.759292 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:48:47.760955 kubelet[2900]: E0124 00:48:47.759468 2900 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.29:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.29:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-f1b70866be.188d8454a3e776fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-f1b70866be,UID:ci-4081.3.6-n-f1b70866be,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-f1b70866be,},FirstTimestamp:2026-01-24 00:48:47.688447741 +0000 UTC m=+0.853432196,LastTimestamp:2026-01-24 00:48:47.688447741 +0000 UTC m=+0.853432196,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-f1b70866be,}" Jan 24 00:48:47.761140 kubelet[2900]: I0124 00:48:47.760940 2900 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:48:47.761140 kubelet[2900]: I0124 00:48:47.761134 2900 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:48:47.761140 kubelet[2900]: I0124 00:48:47.761167 2900 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:47.767643 kubelet[2900]: 
I0124 00:48:47.767618 2900 policy_none.go:49] "None policy: Start" Jan 24 00:48:47.767643 kubelet[2900]: I0124 00:48:47.767641 2900 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:48:47.767761 kubelet[2900]: I0124 00:48:47.767654 2900 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:48:47.784473 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:48:47.794994 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:48:47.799301 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:48:47.802475 kubelet[2900]: E0124 00:48:47.802448 2900 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-f1b70866be\" not found" Jan 24 00:48:47.805602 kubelet[2900]: E0124 00:48:47.804925 2900 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:48:47.805602 kubelet[2900]: I0124 00:48:47.805176 2900 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:48:47.805602 kubelet[2900]: I0124 00:48:47.805191 2900 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:48:47.806940 kubelet[2900]: I0124 00:48:47.806914 2900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:48:47.807957 kubelet[2900]: E0124 00:48:47.807852 2900 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:48:47.807957 kubelet[2900]: E0124 00:48:47.807894 2900 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-f1b70866be\" not found" Jan 24 00:48:47.907867 kubelet[2900]: I0124 00:48:47.907835 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:47.941338 kubelet[2900]: E0124 00:48:47.908321 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:47.941338 kubelet[2900]: E0124 00:48:47.916884 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f1b70866be?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="400ms" Jan 24 00:48:47.954035 systemd[1]: Created slice kubepods-burstable-pod310c07cb90666d5e9f71d5f09cdac787.slice - libcontainer container kubepods-burstable-pod310c07cb90666d5e9f71d5f09cdac787.slice. Jan 24 00:48:47.969532 kubelet[2900]: E0124 00:48:47.969502 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:47.973308 systemd[1]: Created slice kubepods-burstable-podd5e70be1203645f91d7ade6071aa8603.slice - libcontainer container kubepods-burstable-podd5e70be1203645f91d7ade6071aa8603.slice. 
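The long container_manager_linux NodeConfig line a few entries back embeds the kubelet's effective configuration as one JSON blob; the HardEvictionThresholds in it are the stock kubelet defaults. Restated so the units are readable (values copied verbatim from the log, parsing code illustrative):

```python
# The HardEvictionThresholds from the NodeConfig JSON above, re-printed in
# human-readable form. Quantity and Percentage values are as logged.
import json

thresholds = json.loads("""[
  {"Signal": "memory.available",   "Value": {"Quantity": "100Mi", "Percentage": 0}},
  {"Signal": "nodefs.available",   "Value": {"Quantity": null, "Percentage": 0.1}},
  {"Signal": "nodefs.inodesFree",  "Value": {"Quantity": null, "Percentage": 0.05}},
  {"Signal": "imagefs.available",  "Value": {"Quantity": null, "Percentage": 0.15}},
  {"Signal": "imagefs.inodesFree", "Value": {"Quantity": null, "Percentage": 0.05}}
]""")
for t in thresholds:
    value = t["Value"]
    limit = value["Quantity"] or f"{value['Percentage']:.0%}"
    print(f"evict when {t['Signal']} < {limit}")
```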
Jan 24 00:48:47.974846 kubelet[2900]: E0124 00:48:47.974817 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006517 kubelet[2900]: I0124 00:48:48.006059 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006517 kubelet[2900]: I0124 00:48:48.006095 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006517 kubelet[2900]: I0124 00:48:48.006138 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006517 kubelet[2900]: I0124 00:48:48.006183 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006517 kubelet[2900]: I0124 00:48:48.006209 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006785 kubelet[2900]: I0124 00:48:48.006249 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006785 kubelet[2900]: I0124 00:48:48.006273 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006785 kubelet[2900]: I0124 00:48:48.006298 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006785 kubelet[2900]: I0124 00:48:48.006337 2900 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b3d1836ea2aa69dd2dd7f32983c2a10-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f1b70866be\" (UID: \"7b3d1836ea2aa69dd2dd7f32983c2a10\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.006693 systemd[1]: Created slice kubepods-burstable-pod7b3d1836ea2aa69dd2dd7f32983c2a10.slice - libcontainer container kubepods-burstable-pod7b3d1836ea2aa69dd2dd7f32983c2a10.slice. Jan 24 00:48:48.008691 kubelet[2900]: E0124 00:48:48.008670 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.111054 kubelet[2900]: I0124 00:48:48.110987 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.111467 kubelet[2900]: E0124 00:48:48.111428 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.271571 containerd[1724]: time="2026-01-24T00:48:48.271440811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f1b70866be,Uid:310c07cb90666d5e9f71d5f09cdac787,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:48.275933 containerd[1724]: time="2026-01-24T00:48:48.275891865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f1b70866be,Uid:d5e70be1203645f91d7ade6071aa8603,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:48.310069 containerd[1724]: time="2026-01-24T00:48:48.310030385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f1b70866be,Uid:7b3d1836ea2aa69dd2dd7f32983c2a10,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:48.317766 kubelet[2900]: E0124 00:48:48.317734 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f1b70866be?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="800ms" Jan 24 00:48:48.513912 kubelet[2900]: I0124 00:48:48.513872 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.514277 kubelet[2900]: E0124 00:48:48.514242 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:48.798582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707611458.mount: Deactivated successfully. 
Jan 24 00:48:48.822710 containerd[1724]: time="2026-01-24T00:48:48.822665889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:48.825568 containerd[1724]: time="2026-01-24T00:48:48.825514724Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 00:48:48.830682 containerd[1724]: time="2026-01-24T00:48:48.830643088Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:48.833292 kubelet[2900]: E0124 00:48:48.833261 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.29:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:48:48.833596 containerd[1724]: time="2026-01-24T00:48:48.833565123Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:48.837361 containerd[1724]: time="2026-01-24T00:48:48.837312870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:48:48.840710 containerd[1724]: time="2026-01-24T00:48:48.840670811Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:48.842792 containerd[1724]: time="2026-01-24T00:48:48.842526334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:48:48.849464 containerd[1724]: time="2026-01-24T00:48:48.849433319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:48.850352 containerd[1724]: time="2026-01-24T00:48:48.850313129Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 574.345563ms" Jan 24 00:48:48.851053 kubelet[2900]: E0124 00:48:48.850974 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.29:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-f1b70866be&limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:48:48.852515 containerd[1724]: time="2026-01-24T00:48:48.852484056Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 542.38707ms" Jan 24 00:48:48.852949 containerd[1724]: time="2026-01-24T00:48:48.852919962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 581.394949ms" Jan 24 00:48:49.119164 kubelet[2900]: E0124 00:48:49.119100 2900 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.29:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-f1b70866be?timeout=10s\": dial tcp 10.200.4.29:6443: connect: connection refused" interval="1.6s" Jan 24 00:48:49.193139 kubelet[2900]: E0124 00:48:49.193100 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.29:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:48:49.200027 kubelet[2900]: E0124 00:48:49.199989 2900 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.29:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:48:49.319752 kubelet[2900]: I0124 00:48:49.319330 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:49.319752 kubelet[2900]: E0124 00:48:49.319708 2900 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.29:6443/api/v1/nodes\": dial tcp 10.200.4.29:6443: connect: connection refused" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:49.667813 kubelet[2900]: E0124 00:48:49.667737 2900 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.29:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.29:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:48:49.698939 containerd[1724]: time="2026-01-24T00:48:49.698521460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:49.698939 containerd[1724]: time="2026-01-24T00:48:49.698612662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:49.698939 containerd[1724]: time="2026-01-24T00:48:49.698636262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.698939 containerd[1724]: time="2026-01-24T00:48:49.698736963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.703097 containerd[1724]: time="2026-01-24T00:48:49.702881014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:49.703097 containerd[1724]: time="2026-01-24T00:48:49.702951115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:49.703097 containerd[1724]: time="2026-01-24T00:48:49.702972815Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.703097 containerd[1724]: time="2026-01-24T00:48:49.703055216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.715617 containerd[1724]: time="2026-01-24T00:48:49.715315367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:49.715617 containerd[1724]: time="2026-01-24T00:48:49.715368268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:49.715617 containerd[1724]: time="2026-01-24T00:48:49.715382468Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.715617 containerd[1724]: time="2026-01-24T00:48:49.715460369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:49.736370 systemd[1]: Started cri-containerd-6aba6692a4ffc1f26e3f8c19915093a4b2aa7d454676ae4e5f98a57c4ddb648d.scope - libcontainer container 6aba6692a4ffc1f26e3f8c19915093a4b2aa7d454676ae4e5f98a57c4ddb648d. Jan 24 00:48:49.742354 systemd[1]: Started cri-containerd-57a763a9ed22df4308d3bb64a3bc396c41dc44a33f51bb92441948359222ea89.scope - libcontainer container 57a763a9ed22df4308d3bb64a3bc396c41dc44a33f51bb92441948359222ea89. Jan 24 00:48:49.749065 systemd[1]: Started cri-containerd-379455b290674339ee661c057f8f996769a856cbcf13bddec6a27cfc34fdce31.scope - libcontainer container 379455b290674339ee661c057f8f996769a856cbcf13bddec6a27cfc34fdce31. 
Jan 24 00:48:49.842772 containerd[1724]: time="2026-01-24T00:48:49.842723934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-f1b70866be,Uid:d5e70be1203645f91d7ade6071aa8603,Namespace:kube-system,Attempt:0,} returns sandbox id \"379455b290674339ee661c057f8f996769a856cbcf13bddec6a27cfc34fdce31\"" Jan 24 00:48:49.845201 containerd[1724]: time="2026-01-24T00:48:49.845070763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-f1b70866be,Uid:7b3d1836ea2aa69dd2dd7f32983c2a10,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aba6692a4ffc1f26e3f8c19915093a4b2aa7d454676ae4e5f98a57c4ddb648d\"" Jan 24 00:48:49.845821 containerd[1724]: time="2026-01-24T00:48:49.845705770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-f1b70866be,Uid:310c07cb90666d5e9f71d5f09cdac787,Namespace:kube-system,Attempt:0,} returns sandbox id \"57a763a9ed22df4308d3bb64a3bc396c41dc44a33f51bb92441948359222ea89\"" Jan 24 00:48:49.851902 containerd[1724]: time="2026-01-24T00:48:49.851707044Z" level=info msg="CreateContainer within sandbox \"379455b290674339ee661c057f8f996769a856cbcf13bddec6a27cfc34fdce31\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:48:49.859101 containerd[1724]: time="2026-01-24T00:48:49.859074835Z" level=info msg="CreateContainer within sandbox \"6aba6692a4ffc1f26e3f8c19915093a4b2aa7d454676ae4e5f98a57c4ddb648d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:48:49.863061 containerd[1724]: time="2026-01-24T00:48:49.863033284Z" level=info msg="CreateContainer within sandbox \"57a763a9ed22df4308d3bb64a3bc396c41dc44a33f51bb92441948359222ea89\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:48:49.905173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461849847.mount: Deactivated successfully. Jan 24 00:48:49.909856 containerd[1724]: time="2026-01-24T00:48:49.909819259Z" level=info msg="CreateContainer within sandbox \"379455b290674339ee661c057f8f996769a856cbcf13bddec6a27cfc34fdce31\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ddeb21a7638aa6448b5166883361a74ec931588b8a46f3d35f4f2e6233606e77\"" Jan 24 00:48:49.910415 containerd[1724]: time="2026-01-24T00:48:49.910382566Z" level=info msg="StartContainer for \"ddeb21a7638aa6448b5166883361a74ec931588b8a46f3d35f4f2e6233606e77\"" Jan 24 00:48:49.937294 systemd[1]: Started cri-containerd-ddeb21a7638aa6448b5166883361a74ec931588b8a46f3d35f4f2e6233606e77.scope - libcontainer container ddeb21a7638aa6448b5166883361a74ec931588b8a46f3d35f4f2e6233606e77. 
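
The RunPodSandbox / CreateContainer / StartContainer entries around this point are the three CRI calls a pod goes through. A sketch of that lifecycle against containerd's CRI socket, reusing the scheduler's metadata from the log; the image tag is an assumption (the log never names the scheduler image), and error handling is reduced to panics:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx := context.Background()
	// Socket path of a stock containerd install; adjust as needed.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// 1. RunPodSandbox: metadata mirrors the log's entry for the scheduler pod.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name:      "kube-scheduler-ci-4081.3.6-n-f1b70866be",
			Uid:       "7b3d1836ea2aa69dd2dd7f32983c2a10",
			Namespace: "kube-system",
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer within the returned sandbox id.
	ctr, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-scheduler"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-scheduler:v1.33.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, cf. the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		panic(err)
	}
}
```
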
Jan 24 00:48:49.946820 containerd[1724]: time="2026-01-24T00:48:49.946693812Z" level=info msg="CreateContainer within sandbox \"6aba6692a4ffc1f26e3f8c19915093a4b2aa7d454676ae4e5f98a57c4ddb648d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"15c371a98aab10ff77be3cdb307963bad98bbd2e0b3fde971ed4cc597660f3f2\"" Jan 24 00:48:49.948018 containerd[1724]: time="2026-01-24T00:48:49.947135618Z" level=info msg="StartContainer for \"15c371a98aab10ff77be3cdb307963bad98bbd2e0b3fde971ed4cc597660f3f2\"" Jan 24 00:48:49.956391 containerd[1724]: time="2026-01-24T00:48:49.956359531Z" level=info msg="CreateContainer within sandbox \"57a763a9ed22df4308d3bb64a3bc396c41dc44a33f51bb92441948359222ea89\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d5a5d11898cf306dd4c109036a34d31319f4a4a34d46c4c10c454ff3d8d658ff\"" Jan 24 00:48:49.957484 containerd[1724]: time="2026-01-24T00:48:49.957458545Z" level=info msg="StartContainer for \"d5a5d11898cf306dd4c109036a34d31319f4a4a34d46c4c10c454ff3d8d658ff\"" Jan 24 00:48:49.996358 systemd[1]: Started cri-containerd-15c371a98aab10ff77be3cdb307963bad98bbd2e0b3fde971ed4cc597660f3f2.scope - libcontainer container 15c371a98aab10ff77be3cdb307963bad98bbd2e0b3fde971ed4cc597660f3f2. Jan 24 00:48:50.011327 systemd[1]: Started cri-containerd-d5a5d11898cf306dd4c109036a34d31319f4a4a34d46c4c10c454ff3d8d658ff.scope - libcontainer container d5a5d11898cf306dd4c109036a34d31319f4a4a34d46c4c10c454ff3d8d658ff. Jan 24 00:48:50.025188 containerd[1724]: time="2026-01-24T00:48:50.025131977Z" level=info msg="StartContainer for \"ddeb21a7638aa6448b5166883361a74ec931588b8a46f3d35f4f2e6233606e77\" returns successfully" Jan 24 00:48:50.104631 containerd[1724]: time="2026-01-24T00:48:50.104587354Z" level=info msg="StartContainer for \"15c371a98aab10ff77be3cdb307963bad98bbd2e0b3fde971ed4cc597660f3f2\" returns successfully" Jan 24 00:48:50.104797 containerd[1724]: time="2026-01-24T00:48:50.104593354Z" level=info msg="StartContainer for \"d5a5d11898cf306dd4c109036a34d31319f4a4a34d46c4c10c454ff3d8d658ff\" returns successfully" Jan 24 00:48:50.772736 kubelet[2900]: E0124 00:48:50.772698 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:50.775947 kubelet[2900]: E0124 00:48:50.775910 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:50.791374 kubelet[2900]: E0124 00:48:50.791343 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:50.922398 kubelet[2900]: I0124 00:48:50.922367 2900 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:51.784165 kubelet[2900]: E0124 00:48:51.782366 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:51.784915 kubelet[2900]: E0124 00:48:51.784777 2900 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.077448 kubelet[2900]: E0124 00:48:52.077328 2900 
nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-f1b70866be\" not found" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.149574 kubelet[2900]: I0124 00:48:52.149249 2900 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.202546 kubelet[2900]: I0124 00:48:52.202338 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.217570 kubelet[2900]: E0124 00:48:52.217347 2900 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f1b70866be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.217570 kubelet[2900]: I0124 00:48:52.217380 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.219387 kubelet[2900]: E0124 00:48:52.219216 2900 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.219387 kubelet[2900]: I0124 00:48:52.219240 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.222905 kubelet[2900]: E0124 00:48:52.222865 2900 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.683663 kubelet[2900]: I0124 00:48:52.683618 2900 apiserver.go:52] "Watching apiserver" Jan 24 00:48:52.704695 kubelet[2900]: I0124 00:48:52.704667 2900 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:48:52.781727 kubelet[2900]: I0124 00:48:52.781694 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:52.784001 kubelet[2900]: E0124 00:48:52.783967 2900 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:54.071710 kubelet[2900]: I0124 00:48:54.071658 2900 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:54.081838 kubelet[2900]: I0124 00:48:54.081708 2900 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:55.349893 systemd[1]: Reloading requested from client PID 3179 ('systemctl') (unit session-9.scope)... Jan 24 00:48:55.349908 systemd[1]: Reloading... Jan 24 00:48:55.461199 zram_generator::config[3219]: No configuration found. Jan 24 00:48:55.580800 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:48:55.672215 systemd[1]: Reloading finished in 321 ms. 
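
The "no PriorityClass with name system-node-critical was found" mirror-pod failures are transient: the API server bootstraps the built-in priority classes shortly after it comes up, and the retries at 00:48:56 below fail only with "already exists". For illustration, what that built-in object amounts to if expressed as a manual create (2000001000 is the value upstream reserves for node-critical pods; on a healthy cluster this create is unnecessary and returns AlreadyExists):

```go
package main

import (
	"context"

	schedulingv1 "k8s.io/api/scheduling/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// system-node-critical: the highest built-in priority, used by the
	// kube-system static pods whose mirror pods failed above.
	pc := &schedulingv1.PriorityClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "system-node-critical"},
		Value:       2000001000,
		Description: "Used for system critical pods that must not be moved from their current node.",
	}
	if _, err := cs.SchedulingV1().PriorityClasses().Create(context.Background(), pc, metav1.CreateOptions{}); err != nil {
		panic(err) // AlreadyExists once the apiserver has bootstrapped it
	}
}
```
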
Jan 24 00:48:55.712407 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:55.731258 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:48:55.731511 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:55.737455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:55.935301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:55.945531 (kubelet)[3286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:48:55.985612 kubelet[3286]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:55.985612 kubelet[3286]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:48:55.986004 kubelet[3286]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:55.986004 kubelet[3286]: I0124 00:48:55.985766 3286 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:48:55.991611 kubelet[3286]: I0124 00:48:55.991580 3286 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 24 00:48:55.991611 kubelet[3286]: I0124 00:48:55.991601 3286 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:48:55.991825 kubelet[3286]: I0124 00:48:55.991805 3286 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:48:55.992895 kubelet[3286]: I0124 00:48:55.992872 3286 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:48:55.995231 kubelet[3286]: I0124 00:48:55.994736 3286 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:48:55.997903 kubelet[3286]: E0124 00:48:55.997872 3286 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:48:55.998017 kubelet[3286]: I0124 00:48:55.998004 3286 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:48:56.001740 kubelet[3286]: I0124 00:48:56.001710 3286 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:48:56.001951 kubelet[3286]: I0124 00:48:56.001923 3286 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:48:56.003349 kubelet[3286]: I0124 00:48:56.001946 3286 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-f1b70866be","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:48:56.003503 kubelet[3286]: I0124 00:48:56.003362 3286 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:48:56.003503 kubelet[3286]: I0124 00:48:56.003374 3286 container_manager_linux.go:303] "Creating device plugin manager" Jan 24 00:48:56.003590 kubelet[3286]: I0124 00:48:56.003526 3286 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:56.003939 kubelet[3286]: I0124 00:48:56.003922 3286 kubelet.go:480] "Attempting to sync node with API server" Jan 24 00:48:56.007184 kubelet[3286]: I0124 00:48:56.004116 3286 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:48:56.007184 kubelet[3286]: I0124 00:48:56.004177 3286 kubelet.go:386] "Adding apiserver pod source" Jan 24 00:48:56.007184 kubelet[3286]: I0124 00:48:56.004198 3286 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:48:56.011164 kubelet[3286]: I0124 00:48:56.009059 3286 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:48:56.013086 kubelet[3286]: I0124 00:48:56.012053 3286 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:48:56.024617 kubelet[3286]: I0124 00:48:56.024333 3286 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:48:56.025082 kubelet[3286]: I0124 00:48:56.025064 3286 server.go:1289] "Started kubelet" Jan 24 00:48:56.026415 kubelet[3286]: I0124 00:48:56.026139 3286 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 
00:48:56.027066 kubelet[3286]: I0124 00:48:56.027043 3286 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:48:56.027247 kubelet[3286]: I0124 00:48:56.027200 3286 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:48:56.028451 kubelet[3286]: I0124 00:48:56.028397 3286 server.go:317] "Adding debug handlers to kubelet server" Jan 24 00:48:56.028888 kubelet[3286]: I0124 00:48:56.028859 3286 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:48:56.029635 kubelet[3286]: I0124 00:48:56.029614 3286 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:48:56.031860 kubelet[3286]: I0124 00:48:56.031844 3286 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:48:56.034632 kubelet[3286]: I0124 00:48:56.031880 3286 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:48:56.035401 kubelet[3286]: I0124 00:48:56.035091 3286 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:48:56.037293 kubelet[3286]: I0124 00:48:56.037265 3286 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:48:56.037398 kubelet[3286]: I0124 00:48:56.037376 3286 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:48:56.041624 kubelet[3286]: E0124 00:48:56.041443 3286 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:48:56.049062 kubelet[3286]: I0124 00:48:56.049044 3286 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:48:56.062882 kubelet[3286]: I0124 00:48:56.062848 3286 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 24 00:48:56.064121 kubelet[3286]: I0124 00:48:56.064100 3286 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 24 00:48:56.064421 kubelet[3286]: I0124 00:48:56.064166 3286 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 24 00:48:56.064421 kubelet[3286]: I0124 00:48:56.064190 3286 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
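
The "Starting to serve the podresources API" entry above exposes a gRPC endpoint on the kubelet socket it names. A sketch of a client listing pod resource assignments from that endpoint (rate-limited per the log to qps=100, burstTokens=10); the socket path is taken from the log, the rest is an illustrative consumer:

```go
package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Endpoint as logged by server.go:255 above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	client := podresourcesapi.NewPodResourcesListerClient(conn)

	resp, err := client.List(context.Background(), &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		panic(err)
	}
	for _, pod := range resp.PodResources {
		fmt.Printf("%s/%s: %d containers\n", pod.Namespace, pod.Name, len(pod.Containers))
	}
}
```
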
Jan 24 00:48:56.064421 kubelet[3286]: I0124 00:48:56.064199 3286 kubelet.go:2436] "Starting kubelet main sync loop" Jan 24 00:48:56.064421 kubelet[3286]: E0124 00:48:56.064260 3286 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:48:56.099948 kubelet[3286]: I0124 00:48:56.099921 3286 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:48:56.099948 kubelet[3286]: I0124 00:48:56.099937 3286 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:48:56.099948 kubelet[3286]: I0124 00:48:56.099958 3286 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:56.100189 kubelet[3286]: I0124 00:48:56.100095 3286 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:48:56.100189 kubelet[3286]: I0124 00:48:56.100107 3286 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:48:56.100189 kubelet[3286]: I0124 00:48:56.100128 3286 policy_none.go:49] "None policy: Start" Jan 24 00:48:56.100189 kubelet[3286]: I0124 00:48:56.100140 3286 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:48:56.100189 kubelet[3286]: I0124 00:48:56.100181 3286 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:48:56.100388 kubelet[3286]: I0124 00:48:56.100298 3286 state_mem.go:75] "Updated machine memory state" Jan 24 00:48:56.103774 kubelet[3286]: E0124 00:48:56.103607 3286 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:48:56.103774 kubelet[3286]: I0124 00:48:56.103770 3286 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:48:56.104137 kubelet[3286]: I0124 00:48:56.103782 3286 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:48:56.104137 kubelet[3286]: I0124 00:48:56.103959 3286 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:48:56.108169 kubelet[3286]: E0124 00:48:56.107269 3286 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:48:56.165647 kubelet[3286]: I0124 00:48:56.165543 3286 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.167048 kubelet[3286]: I0124 00:48:56.165962 3286 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.167226 kubelet[3286]: I0124 00:48:56.165542 3286 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.185678 kubelet[3286]: I0124 00:48:56.185645 3286 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:56.185927 kubelet[3286]: I0124 00:48:56.185742 3286 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:56.186194 kubelet[3286]: I0124 00:48:56.186009 3286 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:56.186194 kubelet[3286]: E0124 00:48:56.186066 3286 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.208842 kubelet[3286]: I0124 00:48:56.208748 3286 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.222310 kubelet[3286]: I0124 00:48:56.222283 3286 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.222429 kubelet[3286]: I0124 00:48:56.222354 3286 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.336960 kubelet[3286]: I0124 00:48:56.336909 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.336960 kubelet[3286]: I0124 00:48:56.336962 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337194 kubelet[3286]: I0124 00:48:56.336984 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337194 kubelet[3286]: I0124 00:48:56.337006 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337194 kubelet[3286]: I0124 00:48:56.337027 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/310c07cb90666d5e9f71d5f09cdac787-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" (UID: \"310c07cb90666d5e9f71d5f09cdac787\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337194 kubelet[3286]: I0124 00:48:56.337047 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337194 kubelet[3286]: I0124 00:48:56.337069 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337337 kubelet[3286]: I0124 00:48:56.337088 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e70be1203645f91d7ade6071aa8603-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-f1b70866be\" (UID: \"d5e70be1203645f91d7ade6071aa8603\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:56.337337 kubelet[3286]: I0124 00:48:56.337110 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b3d1836ea2aa69dd2dd7f32983c2a10-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-f1b70866be\" (UID: \"7b3d1836ea2aa69dd2dd7f32983c2a10\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:57.008994 kubelet[3286]: I0124 00:48:57.008934 3286 apiserver.go:52] "Watching apiserver" Jan 24 00:48:57.035778 kubelet[3286]: I0124 00:48:57.035739 3286 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:48:57.088109 kubelet[3286]: I0124 00:48:57.084223 3286 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:57.088109 kubelet[3286]: I0124 00:48:57.084569 3286 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:57.097072 kubelet[3286]: I0124 00:48:57.097038 3286 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:57.097318 kubelet[3286]: E0124 00:48:57.097287 3286 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-f1b70866be\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:57.098756 kubelet[3286]: I0124 00:48:57.098617 3286 
warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:48:57.098847 kubelet[3286]: E0124 00:48:57.098784 3286 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-f1b70866be\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" Jan 24 00:48:57.133547 kubelet[3286]: I0124 00:48:57.133428 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-f1b70866be" podStartSLOduration=1.1334072530000001 podStartE2EDuration="1.133407253s" podCreationTimestamp="2026-01-24 00:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:57.118846876 +0000 UTC m=+1.168991532" watchObservedRunningTime="2026-01-24 00:48:57.133407253 +0000 UTC m=+1.183551909" Jan 24 00:48:57.134977 kubelet[3286]: I0124 00:48:57.134198 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-f1b70866be" podStartSLOduration=3.134184162 podStartE2EDuration="3.134184162s" podCreationTimestamp="2026-01-24 00:48:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:57.13319495 +0000 UTC m=+1.183339506" watchObservedRunningTime="2026-01-24 00:48:57.134184162 +0000 UTC m=+1.184328818" Jan 24 00:48:57.148186 kubelet[3286]: I0124 00:48:57.147447 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-f1b70866be" podStartSLOduration=1.147428022 podStartE2EDuration="1.147428022s" podCreationTimestamp="2026-01-24 00:48:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:57.146045105 +0000 UTC m=+1.196189761" watchObservedRunningTime="2026-01-24 00:48:57.147428022 +0000 UTC m=+1.197572678" Jan 24 00:49:01.189307 kubelet[3286]: I0124 00:49:01.189264 3286 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:49:01.190041 containerd[1724]: time="2026-01-24T00:49:01.189768489Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:49:01.190499 kubelet[3286]: I0124 00:49:01.190038 3286 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:49:01.906026 systemd[1]: Created slice kubepods-besteffort-pod7cc34900_32b1_448a_9370_b97f6eb396ff.slice - libcontainer container kubepods-besteffort-pod7cc34900_32b1_448a_9370_b97f6eb396ff.slice. 
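
The paired "Updating runtime config through cri with podcidr" / "Updating Pod CIDR" entries above correspond to a single CRI UpdateRuntimeConfig call; containerd answers that no CNI config template is set, so the CIDR sits unused until a network plugin (Calico, installed below) drops its own config. A sketch of that call, with the CIDR taken from the log and the socket path assumed:

```go
package main

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	// Push the node's pod CIDR to the runtime, cf. kuberuntime_manager.go:1746.
	_, err = rt.UpdateRuntimeConfig(context.Background(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		panic(err)
	}
}
```
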
Jan 24 00:49:01.975318 kubelet[3286]: I0124 00:49:01.975277 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qbq\" (UniqueName: \"kubernetes.io/projected/7cc34900-32b1-448a-9370-b97f6eb396ff-kube-api-access-j8qbq\") pod \"kube-proxy-645dm\" (UID: \"7cc34900-32b1-448a-9370-b97f6eb396ff\") " pod="kube-system/kube-proxy-645dm" Jan 24 00:49:01.975318 kubelet[3286]: I0124 00:49:01.975322 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7cc34900-32b1-448a-9370-b97f6eb396ff-xtables-lock\") pod \"kube-proxy-645dm\" (UID: \"7cc34900-32b1-448a-9370-b97f6eb396ff\") " pod="kube-system/kube-proxy-645dm" Jan 24 00:49:01.975318 kubelet[3286]: I0124 00:49:01.975348 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7cc34900-32b1-448a-9370-b97f6eb396ff-kube-proxy\") pod \"kube-proxy-645dm\" (UID: \"7cc34900-32b1-448a-9370-b97f6eb396ff\") " pod="kube-system/kube-proxy-645dm" Jan 24 00:49:01.975318 kubelet[3286]: I0124 00:49:01.975367 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7cc34900-32b1-448a-9370-b97f6eb396ff-lib-modules\") pod \"kube-proxy-645dm\" (UID: \"7cc34900-32b1-448a-9370-b97f6eb396ff\") " pod="kube-system/kube-proxy-645dm" Jan 24 00:49:02.195770 systemd[1]: Created slice kubepods-besteffort-podd0292293_9d07_42f1_b502_8b445446bc93.slice - libcontainer container kubepods-besteffort-podd0292293_9d07_42f1_b502_8b445446bc93.slice. Jan 24 00:49:02.216864 containerd[1724]: time="2026-01-24T00:49:02.216825509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-645dm,Uid:7cc34900-32b1-448a-9370-b97f6eb396ff,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:02.262993 containerd[1724]: time="2026-01-24T00:49:02.262881368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:02.263198 containerd[1724]: time="2026-01-24T00:49:02.262958469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:02.263198 containerd[1724]: time="2026-01-24T00:49:02.262979269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:02.263198 containerd[1724]: time="2026-01-24T00:49:02.263125571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:02.277595 kubelet[3286]: I0124 00:49:02.277556 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg75p\" (UniqueName: \"kubernetes.io/projected/d0292293-9d07-42f1-b502-8b445446bc93-kube-api-access-xg75p\") pod \"tigera-operator-7dcd859c48-xhmsf\" (UID: \"d0292293-9d07-42f1-b502-8b445446bc93\") " pod="tigera-operator/tigera-operator-7dcd859c48-xhmsf" Jan 24 00:49:02.278094 kubelet[3286]: I0124 00:49:02.278068 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0292293-9d07-42f1-b502-8b445446bc93-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xhmsf\" (UID: \"d0292293-9d07-42f1-b502-8b445446bc93\") " pod="tigera-operator/tigera-operator-7dcd859c48-xhmsf" Jan 24 00:49:02.287322 systemd[1]: Started cri-containerd-f0ee00e4e01d946b517866a37799f5c3461e6ae88b7580c08883c6b018fc0c16.scope - libcontainer container f0ee00e4e01d946b517866a37799f5c3461e6ae88b7580c08883c6b018fc0c16. Jan 24 00:49:02.307735 containerd[1724]: time="2026-01-24T00:49:02.307289706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-645dm,Uid:7cc34900-32b1-448a-9370-b97f6eb396ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"f0ee00e4e01d946b517866a37799f5c3461e6ae88b7580c08883c6b018fc0c16\"" Jan 24 00:49:02.317666 containerd[1724]: time="2026-01-24T00:49:02.317481129Z" level=info msg="CreateContainer within sandbox \"f0ee00e4e01d946b517866a37799f5c3461e6ae88b7580c08883c6b018fc0c16\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:49:02.375027 containerd[1724]: time="2026-01-24T00:49:02.374977726Z" level=info msg="CreateContainer within sandbox \"f0ee00e4e01d946b517866a37799f5c3461e6ae88b7580c08883c6b018fc0c16\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0a1756b31a04ac7ae25530643e757c8cb90dcdceaeb221bb8fca09a62b993385\"" Jan 24 00:49:02.377186 containerd[1724]: time="2026-01-24T00:49:02.375803136Z" level=info msg="StartContainer for \"0a1756b31a04ac7ae25530643e757c8cb90dcdceaeb221bb8fca09a62b993385\"" Jan 24 00:49:02.404331 systemd[1]: Started cri-containerd-0a1756b31a04ac7ae25530643e757c8cb90dcdceaeb221bb8fca09a62b993385.scope - libcontainer container 0a1756b31a04ac7ae25530643e757c8cb90dcdceaeb221bb8fca09a62b993385. Jan 24 00:49:02.438029 containerd[1724]: time="2026-01-24T00:49:02.437944590Z" level=info msg="StartContainer for \"0a1756b31a04ac7ae25530643e757c8cb90dcdceaeb221bb8fca09a62b993385\" returns successfully" Jan 24 00:49:02.505649 containerd[1724]: time="2026-01-24T00:49:02.505034203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xhmsf,Uid:d0292293-9d07-42f1-b502-8b445446bc93,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:49:02.565452 containerd[1724]: time="2026-01-24T00:49:02.565328434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:02.566760 containerd[1724]: time="2026-01-24T00:49:02.565725239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:02.567616 containerd[1724]: time="2026-01-24T00:49:02.567543761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:02.569404 containerd[1724]: time="2026-01-24T00:49:02.569362183Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:02.597559 systemd[1]: Started cri-containerd-108e6f7f97640fc459bf8e322fbda8bd62fcbb138aab72b8db3cdea425cbb114.scope - libcontainer container 108e6f7f97640fc459bf8e322fbda8bd62fcbb138aab72b8db3cdea425cbb114. Jan 24 00:49:02.640554 containerd[1724]: time="2026-01-24T00:49:02.640483145Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xhmsf,Uid:d0292293-9d07-42f1-b502-8b445446bc93,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"108e6f7f97640fc459bf8e322fbda8bd62fcbb138aab72b8db3cdea425cbb114\"" Jan 24 00:49:02.642801 containerd[1724]: time="2026-01-24T00:49:02.642116365Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:49:03.121220 kubelet[3286]: I0124 00:49:03.121160 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-645dm" podStartSLOduration=2.121127871 podStartE2EDuration="2.121127871s" podCreationTimestamp="2026-01-24 00:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:03.121072471 +0000 UTC m=+7.171217027" watchObservedRunningTime="2026-01-24 00:49:03.121127871 +0000 UTC m=+7.171272427" Jan 24 00:49:04.256529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2387782453.mount: Deactivated successfully. Jan 24 00:49:04.926456 containerd[1724]: time="2026-01-24T00:49:04.926402455Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:04.929165 containerd[1724]: time="2026-01-24T00:49:04.929014087Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:49:04.932215 containerd[1724]: time="2026-01-24T00:49:04.932180425Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:04.938089 containerd[1724]: time="2026-01-24T00:49:04.936449377Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:04.938089 containerd[1724]: time="2026-01-24T00:49:04.937547890Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.295396525s" Jan 24 00:49:04.938089 containerd[1724]: time="2026-01-24T00:49:04.937582490Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:49:04.947494 containerd[1724]: time="2026-01-24T00:49:04.947460310Z" level=info msg="CreateContainer within sandbox \"108e6f7f97640fc459bf8e322fbda8bd62fcbb138aab72b8db3cdea425cbb114\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:49:04.984931 containerd[1724]: 
time="2026-01-24T00:49:04.984889964Z" level=info msg="CreateContainer within sandbox \"108e6f7f97640fc459bf8e322fbda8bd62fcbb138aab72b8db3cdea425cbb114\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"5e3e0fad9449bbebc15ee7625571e1c8d4a973a1a42ad6c71d6ca21447d9f9d1\"" Jan 24 00:49:04.985567 containerd[1724]: time="2026-01-24T00:49:04.985540072Z" level=info msg="StartContainer for \"5e3e0fad9449bbebc15ee7625571e1c8d4a973a1a42ad6c71d6ca21447d9f9d1\"" Jan 24 00:49:05.028378 systemd[1]: Started cri-containerd-5e3e0fad9449bbebc15ee7625571e1c8d4a973a1a42ad6c71d6ca21447d9f9d1.scope - libcontainer container 5e3e0fad9449bbebc15ee7625571e1c8d4a973a1a42ad6c71d6ca21447d9f9d1. Jan 24 00:49:05.058013 containerd[1724]: time="2026-01-24T00:49:05.057955250Z" level=info msg="StartContainer for \"5e3e0fad9449bbebc15ee7625571e1c8d4a973a1a42ad6c71d6ca21447d9f9d1\" returns successfully" Jan 24 00:49:06.503491 kubelet[3286]: I0124 00:49:06.502630 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xhmsf" podStartSLOduration=2.205287714 podStartE2EDuration="4.502608762s" podCreationTimestamp="2026-01-24 00:49:02 +0000 UTC" firstStartedPulling="2026-01-24 00:49:02.641775661 +0000 UTC m=+6.691920317" lastFinishedPulling="2026-01-24 00:49:04.939096809 +0000 UTC m=+8.989241365" observedRunningTime="2026-01-24 00:49:05.122250829 +0000 UTC m=+9.172395485" watchObservedRunningTime="2026-01-24 00:49:06.502608762 +0000 UTC m=+10.552753318" Jan 24 00:49:11.526626 sudo[2378]: pam_unix(sudo:session): session closed for user root Jan 24 00:49:11.625057 sshd[2375]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:11.630498 systemd-logind[1708]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:49:11.633565 systemd[1]: sshd@6-10.200.4.29:22-10.200.16.10:47622.service: Deactivated successfully. Jan 24 00:49:11.636931 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:49:11.637266 systemd[1]: session-9.scope: Consumed 4.405s CPU time, 156.4M memory peak, 0B memory swap peak. Jan 24 00:49:11.638876 systemd-logind[1708]: Removed session 9. Jan 24 00:49:16.944232 systemd[1]: Created slice kubepods-besteffort-pod19d37127_2307_4fa2_b936_dcf90a9548aa.slice - libcontainer container kubepods-besteffort-pod19d37127_2307_4fa2_b936_dcf90a9548aa.slice. 
Jan 24 00:49:16.979476 kubelet[3286]: I0124 00:49:16.979336 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hthl\" (UniqueName: \"kubernetes.io/projected/19d37127-2307-4fa2-b936-dcf90a9548aa-kube-api-access-6hthl\") pod \"calico-typha-58c895ddc8-mnppp\" (UID: \"19d37127-2307-4fa2-b936-dcf90a9548aa\") " pod="calico-system/calico-typha-58c895ddc8-mnppp"
Jan 24 00:49:16.979476 kubelet[3286]: I0124 00:49:16.979384 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19d37127-2307-4fa2-b936-dcf90a9548aa-tigera-ca-bundle\") pod \"calico-typha-58c895ddc8-mnppp\" (UID: \"19d37127-2307-4fa2-b936-dcf90a9548aa\") " pod="calico-system/calico-typha-58c895ddc8-mnppp"
Jan 24 00:49:16.979476 kubelet[3286]: I0124 00:49:16.979406 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/19d37127-2307-4fa2-b936-dcf90a9548aa-typha-certs\") pod \"calico-typha-58c895ddc8-mnppp\" (UID: \"19d37127-2307-4fa2-b936-dcf90a9548aa\") " pod="calico-system/calico-typha-58c895ddc8-mnppp"
Jan 24 00:49:17.138854 systemd[1]: Created slice kubepods-besteffort-poda3f62309_2382_4868_9fba_e2b582639c56.slice - libcontainer container kubepods-besteffort-poda3f62309_2382_4868_9fba_e2b582639c56.slice.
Jan 24 00:49:17.181121 kubelet[3286]: I0124 00:49:17.181066 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-cni-log-dir\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181121 kubelet[3286]: I0124 00:49:17.181124 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-cni-net-dir\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181547 kubelet[3286]: I0124 00:49:17.181163 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a3f62309-2382-4868-9fba-e2b582639c56-node-certs\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181547 kubelet[3286]: I0124 00:49:17.181191 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-xtables-lock\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181547 kubelet[3286]: I0124 00:49:17.181218 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-cni-bin-dir\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181547 kubelet[3286]: I0124 00:49:17.181242 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-var-lib-calico\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181547 kubelet[3286]: I0124 00:49:17.181285 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-policysync\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181718 kubelet[3286]: I0124 00:49:17.181324 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-lib-modules\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181718 kubelet[3286]: I0124 00:49:17.181356 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-flexvol-driver-host\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181718 kubelet[3286]: I0124 00:49:17.181382 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a3f62309-2382-4868-9fba-e2b582639c56-var-run-calico\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181718 kubelet[3286]: I0124 00:49:17.181408 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2489\" (UniqueName: \"kubernetes.io/projected/a3f62309-2382-4868-9fba-e2b582639c56-kube-api-access-j2489\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.181718 kubelet[3286]: I0124 00:49:17.181437 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3f62309-2382-4868-9fba-e2b582639c56-tigera-ca-bundle\") pod \"calico-node-92qft\" (UID: \"a3f62309-2382-4868-9fba-e2b582639c56\") " pod="calico-system/calico-node-92qft"
Jan 24 00:49:17.249828 containerd[1724]: time="2026-01-24T00:49:17.249708170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c895ddc8-mnppp,Uid:19d37127-2307-4fa2-b936-dcf90a9548aa,Namespace:calico-system,Attempt:0,}"
Jan 24 00:49:17.291277 kubelet[3286]: E0124 00:49:17.287815 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:49:17.291277 kubelet[3286]: W0124 00:49:17.287842 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:49:17.291277 kubelet[3286]: E0124 00:49:17.287867 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
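The three-line error above recurs on every plugin probe: kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the vendor directory nodeagent~uds, execs the driver binary with the init verb, and tries to parse stdout as JSON; the binary is missing, so the empty output yields "unexpected end of JSON input". A minimal Go sketch of the init handshake kubelet is waiting for, assuming only the documented FlexVolume calling convention (this is an illustration, not Calico's actual nodeagent~uds driver):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    func main() {
    	// kubelet runs: <plugin-dir>/<vendor~driver>/<driver> init
    	// and expects a JSON status object on stdout.
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		out, _ := json.Marshal(map[string]any{
    			"status": "Success",
    			"capabilities": map[string]bool{
    				"attach": false, // node-local driver: no controller attach/detach
    			},
    		})
    		fmt.Println(string(out))
    		return
    	}
    	// Verbs the driver does not implement must still answer in JSON.
    	fmt.Println(`{"status":"Not supported"}`)
    	os.Exit(1)
    }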
Jan 24 00:49:17.327638 containerd[1724]: time="2026-01-24T00:49:17.326578017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:49:17.327638 containerd[1724]: time="2026-01-24T00:49:17.326645718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:49:17.327638 containerd[1724]: time="2026-01-24T00:49:17.326681219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:49:17.327638 containerd[1724]: time="2026-01-24T00:49:17.326764820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:49:17.338678 kubelet[3286]: E0124 00:49:17.337242 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:49:17.359215 kubelet[3286]: E0124 00:49:17.359190 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:49:17.359215 kubelet[3286]: W0124 00:49:17.359216 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:49:17.359366 kubelet[3286]: E0124 00:49:17.359242 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:49:17.362864 systemd[1]: Started cri-containerd-99ed94c47412461fb060c2410502a6a29676afcc77d16c17b415a8b38882c6ec.scope - libcontainer container 99ed94c47412461fb060c2410502a6a29676afcc77d16c17b415a8b38882c6ec.
Jan 24 00:49:17.364238 kubelet[3286]: E0124 00:49:17.363379 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:49:17.364238 kubelet[3286]: W0124 00:49:17.363396 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:49:17.364238 kubelet[3286]: E0124 00:49:17.364198 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:49:17.384117 kubelet[3286]: I0124 00:49:17.383794 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4kj5\" (UniqueName: \"kubernetes.io/projected/11ebe218-698b-4f81-b5c6-5227731a6439-kube-api-access-q4kj5\") pod \"csi-node-driver-hw7t4\" (UID: \"11ebe218-698b-4f81-b5c6-5227731a6439\") " pod="calico-system/csi-node-driver-hw7t4"
Jan 24 00:49:17.385805 kubelet[3286]: I0124 00:49:17.385657 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/11ebe218-698b-4f81-b5c6-5227731a6439-registration-dir\") pod \"csi-node-driver-hw7t4\" (UID: \"11ebe218-698b-4f81-b5c6-5227731a6439\") " pod="calico-system/csi-node-driver-hw7t4"
Jan 24 00:49:17.386399 kubelet[3286]: I0124 00:49:17.386284 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/11ebe218-698b-4f81-b5c6-5227731a6439-kubelet-dir\") pod \"csi-node-driver-hw7t4\" (UID: \"11ebe218-698b-4f81-b5c6-5227731a6439\") " pod="calico-system/csi-node-driver-hw7t4"
Jan 24 00:49:17.386636 kubelet[3286]: I0124 00:49:17.386577 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/11ebe218-698b-4f81-b5c6-5227731a6439-varrun\") pod \"csi-node-driver-hw7t4\" (UID: \"11ebe218-698b-4f81-b5c6-5227731a6439\") " pod="calico-system/csi-node-driver-hw7t4"
Jan 24 00:49:17.390294 kubelet[3286]: I0124 00:49:17.390068 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/11ebe218-698b-4f81-b5c6-5227731a6439-socket-dir\") pod \"csi-node-driver-hw7t4\" (UID: \"11ebe218-698b-4f81-b5c6-5227731a6439\") " pod="calico-system/csi-node-driver-hw7t4"
Jan 24 00:49:17.442841 containerd[1724]: time="2026-01-24T00:49:17.442782349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-92qft,Uid:a3f62309-2382-4868-9fba-e2b582639c56,Namespace:calico-system,Attempt:0,}"
Jan 24 00:49:17.466841 containerd[1724]: time="2026-01-24T00:49:17.466788945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-58c895ddc8-mnppp,Uid:19d37127-2307-4fa2-b936-dcf90a9548aa,Namespace:calico-system,Attempt:0,} returns sandbox id \"99ed94c47412461fb060c2410502a6a29676afcc77d16c17b415a8b38882c6ec\""
Jan 24 00:49:17.469981 containerd[1724]: time="2026-01-24T00:49:17.469754382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:49:17.490053 containerd[1724]: time="2026-01-24T00:49:17.484178059Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:49:17.490053 containerd[1724]: time="2026-01-24T00:49:17.484239860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:49:17.490053 containerd[1724]: time="2026-01-24T00:49:17.484249760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:49:17.490053 containerd[1724]: time="2026-01-24T00:49:17.484327761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:49:17.492476 kubelet[3286]: E0124 00:49:17.491105 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:49:17.492476 kubelet[3286]: W0124 00:49:17.491130 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:49:17.492476 kubelet[3286]: E0124 00:49:17.491190 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:49:17.537346 systemd[1]: Started cri-containerd-1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312.scope - libcontainer container 1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312.
Jan 24 00:49:17.564309 containerd[1724]: time="2026-01-24T00:49:17.564276446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-92qft,Uid:a3f62309-2382-4868-9fba-e2b582639c56,Namespace:calico-system,Attempt:0,} returns sandbox id \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\""
Jan 24 00:49:18.640717 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3185065127.mount: Deactivated successfully.
Jan 24 00:49:19.064842 kubelet[3286]: E0124 00:49:19.064710 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:49:19.599527 containerd[1724]: time="2026-01-24T00:49:19.598477749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:19.604836 containerd[1724]: time="2026-01-24T00:49:19.604793126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:49:19.608740 containerd[1724]: time="2026-01-24T00:49:19.608709873Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:19.618180 containerd[1724]: time="2026-01-24T00:49:19.617404279Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:49:19.618488 containerd[1724]: time="2026-01-24T00:49:19.618453692Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.14865681s"
Jan 24 00:49:19.618627 containerd[1724]: time="2026-01-24T00:49:19.618607994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
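The "stop pulling image" and "Pulled image ... in 2.14865681s" entries above give enough to estimate the effective pull rate for the typha image. A back-of-the-envelope Go sketch, assuming the logged bytes read approximate the registry transfer and noting the duration is wall time including unpack:

    package main

    import "fmt"

    func main() {
    	const bytesRead = 35234628     // "stop pulling image ...: bytes read=35234628"
    	const pullSeconds = 2.14865681 // "Pulled image ... in 2.14865681s"
    	fmt.Printf("%.1f MiB/s\n", bytesRead/pullSeconds/(1<<20)) // ~15.6 MiB/s
    }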
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:49:19.645643 containerd[1724]: time="2026-01-24T00:49:19.645596921Z" level=info msg="CreateContainer within sandbox \"99ed94c47412461fb060c2410502a6a29676afcc77d16c17b415a8b38882c6ec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:49:19.699087 containerd[1724]: time="2026-01-24T00:49:19.699036771Z" level=info msg="CreateContainer within sandbox \"99ed94c47412461fb060c2410502a6a29676afcc77d16c17b415a8b38882c6ec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fe51bf2748cf23aa4c1b657da7ec8d679080ec7504852e4ddde4c2efaefd874c\"" Jan 24 00:49:19.700271 containerd[1724]: time="2026-01-24T00:49:19.700237785Z" level=info msg="StartContainer for \"fe51bf2748cf23aa4c1b657da7ec8d679080ec7504852e4ddde4c2efaefd874c\"" Jan 24 00:49:19.762358 systemd[1]: Started cri-containerd-fe51bf2748cf23aa4c1b657da7ec8d679080ec7504852e4ddde4c2efaefd874c.scope - libcontainer container fe51bf2748cf23aa4c1b657da7ec8d679080ec7504852e4ddde4c2efaefd874c. Jan 24 00:49:19.853504 containerd[1724]: time="2026-01-24T00:49:19.853397445Z" level=info msg="StartContainer for \"fe51bf2748cf23aa4c1b657da7ec8d679080ec7504852e4ddde4c2efaefd874c\" returns successfully" Jan 24 00:49:20.191629 kubelet[3286]: E0124 00:49:20.191491 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:20.191629 kubelet[3286]: W0124 00:49:20.191518 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:20.191629 kubelet[3286]: E0124 00:49:20.191541 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:20.192721 kubelet[3286]: E0124 00:49:20.192601 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:20.192721 kubelet[3286]: W0124 00:49:20.192615 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:20.192721 kubelet[3286]: E0124 00:49:20.192630 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:20.194299 kubelet[3286]: E0124 00:49:20.192848 3286 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:20.194299 kubelet[3286]: W0124 00:49:20.192857 3286 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:20.194299 kubelet[3286]: E0124 00:49:20.192868 3286 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 24 00:49:20.820780 containerd[1724]: time="2026-01-24T00:49:20.820732595Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:20.824730 containerd[1724]: time="2026-01-24T00:49:20.824648542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:49:20.828837 containerd[1724]: time="2026-01-24T00:49:20.828619190Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:20.834248 containerd[1724]: time="2026-01-24T00:49:20.833533350Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:20.834369 containerd[1724]: time="2026-01-24T00:49:20.834136157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.213247936s" Jan 24 00:49:20.834549 containerd[1724]: time="2026-01-24T00:49:20.834526162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:49:20.841991 containerd[1724]: time="2026-01-24T00:49:20.841963652Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:49:20.900986 containerd[1724]: time="2026-01-24T00:49:20.900946969Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1\"" Jan 24 00:49:20.901989 containerd[1724]: time="2026-01-24T00:49:20.901398274Z" level=info msg="StartContainer for \"69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1\"" Jan 24 00:49:20.938599 systemd[1]: Started cri-containerd-69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1.scope - libcontainer container 69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1. Jan 24 00:49:20.970453 containerd[1724]: time="2026-01-24T00:49:20.970378212Z" level=info msg="StartContainer for \"69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1\" returns successfully" Jan 24 00:49:20.978106 systemd[1]: cri-containerd-69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1.scope: Deactivated successfully. Jan 24 00:49:21.002302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1-rootfs.mount: Deactivated successfully.
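The flexvol-driver init container that just started and exited (StartContainer returns successfully, then its scope is deactivated) is Calico's pod2daemon-flexvol image doing a one-shot copy of the uds FlexVolume binary onto the host, into the same nodeagent~uds directory the kubelet probes above were failing on. A small hypothetical check of that path, with the path copied verbatim from the log, that could be run on the node while debugging:

package main

import (
	"fmt"
	"os"
)

func main() {
	// Path taken from the kubelet driver-call errors above; once the
	// flexvol-driver container has run, this stat should succeed and the
	// FlexVolume probe errors stop recurring.
	const p = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	info, err := os.Stat(p)
	if err != nil {
		fmt.Println("driver not installed yet:", err)
		return
	}
	fmt.Printf("driver present: mode %v, %d bytes\n", info.Mode(), info.Size())
}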
Jan 24 00:49:21.064817 kubelet[3286]: E0124 00:49:21.064779 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:21.148429 kubelet[3286]: I0124 00:49:21.147930 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:49:21.164483 kubelet[3286]: I0124 00:49:21.164428 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-58c895ddc8-mnppp" podStartSLOduration=3.013274629 podStartE2EDuration="5.164409569s" podCreationTimestamp="2026-01-24 00:49:16 +0000 UTC" firstStartedPulling="2026-01-24 00:49:17.469056073 +0000 UTC m=+21.519200629" lastFinishedPulling="2026-01-24 00:49:19.620190913 +0000 UTC m=+23.670335569" observedRunningTime="2026-01-24 00:49:20.174998652 +0000 UTC m=+24.225143208" watchObservedRunningTime="2026-01-24 00:49:21.164409569 +0000 UTC m=+25.214554125" Jan 24 00:49:22.490833 containerd[1724]: time="2026-01-24T00:49:22.490754179Z" level=info msg="shim disconnected" id=69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1 namespace=k8s.io Jan 24 00:49:22.490833 containerd[1724]: time="2026-01-24T00:49:22.490818479Z" level=warning msg="cleaning up after shim disconnected" id=69a554f584c284fb85a76fd306424b63c36ae292bc889501c6bb9e5013fdecc1 namespace=k8s.io Jan 24 00:49:22.490833 containerd[1724]: time="2026-01-24T00:49:22.490831180Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:49:23.064768 kubelet[3286]: E0124 00:49:23.064703 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:23.154509 containerd[1724]: time="2026-01-24T00:49:23.154098836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:49:25.064672 kubelet[3286]: E0124 00:49:25.064612 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:26.302872 containerd[1724]: time="2026-01-24T00:49:26.302823487Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:26.306467 containerd[1724]: time="2026-01-24T00:49:26.306403128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:49:26.310050 containerd[1724]: time="2026-01-24T00:49:26.309985270Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:26.315348 containerd[1724]: time="2026-01-24T00:49:26.315132230Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
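The pod_startup_latency_tracker entry above carries a small computation worth making explicit: the logged values satisfy podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp, and podStartSLOduration = podStartE2EDuration minus the image-pull window (lastFinishedPulling - firstStartedPulling, measured on the monotonic m=+... offsets). A sketch that reproduces the logged numbers from the fields of that one entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the "Observed pod startup duration" entry for
	// calico-typha-58c895ddc8-mnppp above.
	created, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:49:16Z")
	observed, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:49:21.164409569Z")

	// Monotonic offsets (the m=+... suffixes) bounding the image pulls.
	firstStartedPulling := 21.519200629
	lastFinishedPulling := 23.670335569

	e2e := observed.Sub(created)
	pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))

	fmt.Println("podStartE2EDuration:", e2e)      // 5.164409569s
	fmt.Println("podStartSLOduration:", e2e-pull) // ~3.013274629s, modulo float rounding
}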
Jan 24 00:49:26.315858 containerd[1724]: time="2026-01-24T00:49:26.315824738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.161590001s" Jan 24 00:49:26.315941 containerd[1724]: time="2026-01-24T00:49:26.315864439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:49:26.328973 containerd[1724]: time="2026-01-24T00:49:26.328940891Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:49:26.371153 containerd[1724]: time="2026-01-24T00:49:26.371103782Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78\"" Jan 24 00:49:26.371842 containerd[1724]: time="2026-01-24T00:49:26.371807990Z" level=info msg="StartContainer for \"806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78\"" Jan 24 00:49:26.404346 systemd[1]: Started cri-containerd-806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78.scope - libcontainer container 806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78. Jan 24 00:49:26.438897 containerd[1724]: time="2026-01-24T00:49:26.438859271Z" level=info msg="StartContainer for \"806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78\" returns successfully" Jan 24 00:49:27.064669 kubelet[3286]: E0124 00:49:27.064623 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:28.097809 containerd[1724]: time="2026-01-24T00:49:28.097757487Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:49:28.100640 systemd[1]: cri-containerd-806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78.scope: Deactivated successfully. Jan 24 00:49:28.144591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78-rootfs.mount: Deactivated successfully. Jan 24 00:49:28.202795 kubelet[3286]: I0124 00:49:28.202763 3286 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:49:29.343775 systemd[1]: Created slice kubepods-burstable-pod83de2bc6_05d2_4a24_a80e_44ff528a5b2e.slice - libcontainer container kubepods-burstable-pod83de2bc6_05d2_4a24_a80e_44ff528a5b2e.slice.
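The cni reload error a few entries above is a sequencing artifact: Calico's install-cni container writes /etc/cni/net.d/calico-kubeconfig first, containerd's file watcher reacts to that WRITE event, and the reload fails because the network config list itself has not landed in /etc/cni/net.d yet. Once it does, the node flips ready ("Fast updating node status as it just became ready"). For orientation, a sketch that parses a hypothetical, heavily trimmed conflist of the general shape install-cni drops in; the field names mirror the standard CNI config-list format, not the exact file on this node:

package main

import (
	"encoding/json"
	"fmt"
)

// Trimmed, illustrative stand-in for the conflist install-cni writes; the
// kubeconfig path is the one named in the WRITE event above.
const sample = `{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "calico",
      "ipam": {"type": "calico-ipam"},
      "kubernetes": {"kubeconfig": "/etc/cni/net.d/calico-kubeconfig"}
    }
  ]
}`

type confList struct {
	Name       string `json:"name"`
	CNIVersion string `json:"cniVersion"`
	Plugins    []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

func main() {
	var c confList
	if err := json.Unmarshal([]byte(sample), &c); err != nil {
		fmt.Println("cni config load failed:", err) // the condition containerd reports above
		return
	}
	fmt.Printf("network %q (cniVersion %s), %d plugin(s)\n", c.Name, c.CNIVersion, len(c.Plugins))
}

The sandbox failures that follow are the remaining gap in the same bring-up: the calico CNI plugin also reads /var/lib/calico/nodename, which only exists once the calico/node container is running, exactly as the "failed (add)"/"failed (delete)" errors below spell out.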
Jan 24 00:49:29.345775 containerd[1724]: time="2026-01-24T00:49:29.344399004Z" level=info msg="shim disconnected" id=806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78 namespace=k8s.io Jan 24 00:49:29.345775 containerd[1724]: time="2026-01-24T00:49:29.344473905Z" level=warning msg="cleaning up after shim disconnected" id=806e8b57f0f69cf2281a094eb11f3894523cd99fda6af4f9056183c098c80d78 namespace=k8s.io Jan 24 00:49:29.345775 containerd[1724]: time="2026-01-24T00:49:29.344485005Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:49:29.360570 systemd[1]: Created slice kubepods-besteffort-podf1ccdfdb_e7f0_4061_88d9_d4f165d7633c.slice - libcontainer container kubepods-besteffort-podf1ccdfdb_e7f0_4061_88d9_d4f165d7633c.slice. Jan 24 00:49:29.372643 systemd[1]: Created slice kubepods-burstable-pod11df6f13_f663_4026_809b_f40550f91486.slice - libcontainer container kubepods-burstable-pod11df6f13_f663_4026_809b_f40550f91486.slice. Jan 24 00:49:29.383444 containerd[1724]: time="2026-01-24T00:49:29.383382158Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:49:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:49:29.387056 kubelet[3286]: I0124 00:49:29.387022 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhqg4\" (UniqueName: \"kubernetes.io/projected/11df6f13-f663-4026-809b-f40550f91486-kube-api-access-lhqg4\") pod \"coredns-674b8bbfcf-rxr9t\" (UID: \"11df6f13-f663-4026-809b-f40550f91486\") " pod="kube-system/coredns-674b8bbfcf-rxr9t" Jan 24 00:49:29.388176 kubelet[3286]: I0124 00:49:29.387602 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvbq9\" (UniqueName: \"kubernetes.io/projected/83de2bc6-05d2-4a24-a80e-44ff528a5b2e-kube-api-access-tvbq9\") pod \"coredns-674b8bbfcf-8wvv8\" (UID: \"83de2bc6-05d2-4a24-a80e-44ff528a5b2e\") " pod="kube-system/coredns-674b8bbfcf-8wvv8" Jan 24 00:49:29.388176 kubelet[3286]: I0124 00:49:29.387650 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1ccdfdb-e7f0-4061-88d9-d4f165d7633c-tigera-ca-bundle\") pod \"calico-kube-controllers-7cbf78d979-7mqd4\" (UID: \"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c\") " pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" Jan 24 00:49:29.388176 kubelet[3286]: I0124 00:49:29.387689 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw48x\" (UniqueName: \"kubernetes.io/projected/f1ccdfdb-e7f0-4061-88d9-d4f165d7633c-kube-api-access-fw48x\") pod \"calico-kube-controllers-7cbf78d979-7mqd4\" (UID: \"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c\") " pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" Jan 24 00:49:29.388176 kubelet[3286]: I0124 00:49:29.387717 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/11df6f13-f663-4026-809b-f40550f91486-config-volume\") pod \"coredns-674b8bbfcf-rxr9t\" (UID: \"11df6f13-f663-4026-809b-f40550f91486\") " pod="kube-system/coredns-674b8bbfcf-rxr9t" Jan 24 00:49:29.388176 kubelet[3286]: I0124 00:49:29.387759 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/83de2bc6-05d2-4a24-a80e-44ff528a5b2e-config-volume\") pod \"coredns-674b8bbfcf-8wvv8\" (UID: \"83de2bc6-05d2-4a24-a80e-44ff528a5b2e\") " pod="kube-system/coredns-674b8bbfcf-8wvv8" Jan 24 00:49:29.391115 systemd[1]: Created slice kubepods-besteffort-pod10ee6a73_4c28_4f2c_91e8_1fb5bb1182ff.slice - libcontainer container kubepods-besteffort-pod10ee6a73_4c28_4f2c_91e8_1fb5bb1182ff.slice. Jan 24 00:49:29.404832 systemd[1]: Created slice kubepods-besteffort-poda99a2457_cb1f_40ec_b343_9e0ed2df6091.slice - libcontainer container kubepods-besteffort-poda99a2457_cb1f_40ec_b343_9e0ed2df6091.slice. Jan 24 00:49:29.419270 systemd[1]: Created slice kubepods-besteffort-pod11ebe218_698b_4f81_b5c6_5227731a6439.slice - libcontainer container kubepods-besteffort-pod11ebe218_698b_4f81_b5c6_5227731a6439.slice. Jan 24 00:49:29.426426 containerd[1724]: time="2026-01-24T00:49:29.426283557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw7t4,Uid:11ebe218-698b-4f81-b5c6-5227731a6439,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:29.432531 systemd[1]: Created slice kubepods-besteffort-pod6978497b_e576_4331_b78d_a328fd7094ed.slice - libcontainer container kubepods-besteffort-pod6978497b_e576_4331_b78d_a328fd7094ed.slice. Jan 24 00:49:29.439670 systemd[1]: Created slice kubepods-besteffort-podc5ffb552_fd5c_4233_a4d0_bee61c2df92f.slice - libcontainer container kubepods-besteffort-podc5ffb552_fd5c_4233_a4d0_bee61c2df92f.slice. Jan 24 00:49:29.491797 kubelet[3286]: I0124 00:49:29.488305 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z72nt\" (UniqueName: \"kubernetes.io/projected/a99a2457-cb1f-40ec-b343-9e0ed2df6091-kube-api-access-z72nt\") pod \"goldmane-666569f655-665v9\" (UID: \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\") " pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:29.491797 kubelet[3286]: I0124 00:49:29.488352 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jg2lz\" (UniqueName: \"kubernetes.io/projected/c5ffb552-fd5c-4233-a4d0-bee61c2df92f-kube-api-access-jg2lz\") pod \"calico-apiserver-6956bf9c49-lv44h\" (UID: \"c5ffb552-fd5c-4233-a4d0-bee61c2df92f\") " pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" Jan 24 00:49:29.491797 kubelet[3286]: I0124 00:49:29.488383 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff-calico-apiserver-certs\") pod \"calico-apiserver-6956bf9c49-kjw59\" (UID: \"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff\") " pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" Jan 24 00:49:29.491797 kubelet[3286]: I0124 00:49:29.489413 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a99a2457-cb1f-40ec-b343-9e0ed2df6091-config\") pod \"goldmane-666569f655-665v9\" (UID: \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\") " pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:29.491797 kubelet[3286]: I0124 00:49:29.489516 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6978497b-e576-4331-b78d-a328fd7094ed-whisker-backend-key-pair\") pod \"whisker-cd866fd95-t6ls8\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") 
" pod="calico-system/whisker-cd866fd95-t6ls8" Jan 24 00:49:29.492176 kubelet[3286]: I0124 00:49:29.489858 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a99a2457-cb1f-40ec-b343-9e0ed2df6091-goldmane-key-pair\") pod \"goldmane-666569f655-665v9\" (UID: \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\") " pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:29.492176 kubelet[3286]: I0124 00:49:29.490258 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzdgt\" (UniqueName: \"kubernetes.io/projected/10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff-kube-api-access-fzdgt\") pod \"calico-apiserver-6956bf9c49-kjw59\" (UID: \"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff\") " pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" Jan 24 00:49:29.492176 kubelet[3286]: I0124 00:49:29.490961 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgtwm\" (UniqueName: \"kubernetes.io/projected/6978497b-e576-4331-b78d-a328fd7094ed-kube-api-access-dgtwm\") pod \"whisker-cd866fd95-t6ls8\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") " pod="calico-system/whisker-cd866fd95-t6ls8" Jan 24 00:49:29.492534 kubelet[3286]: I0124 00:49:29.492382 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6978497b-e576-4331-b78d-a328fd7094ed-whisker-ca-bundle\") pod \"whisker-cd866fd95-t6ls8\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") " pod="calico-system/whisker-cd866fd95-t6ls8" Jan 24 00:49:29.501068 kubelet[3286]: I0124 00:49:29.496347 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a99a2457-cb1f-40ec-b343-9e0ed2df6091-goldmane-ca-bundle\") pod \"goldmane-666569f655-665v9\" (UID: \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\") " pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:29.501068 kubelet[3286]: I0124 00:49:29.496387 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5ffb552-fd5c-4233-a4d0-bee61c2df92f-calico-apiserver-certs\") pod \"calico-apiserver-6956bf9c49-lv44h\" (UID: \"c5ffb552-fd5c-4233-a4d0-bee61c2df92f\") " pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" Jan 24 00:49:29.574093 containerd[1724]: time="2026-01-24T00:49:29.574044378Z" level=error msg="Failed to destroy network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.574441 containerd[1724]: time="2026-01-24T00:49:29.574413982Z" level=error msg="encountered an error cleaning up failed sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.574554 containerd[1724]: time="2026-01-24T00:49:29.574479283Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-hw7t4,Uid:11ebe218-698b-4f81-b5c6-5227731a6439,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.574818 kubelet[3286]: E0124 00:49:29.574765 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.574944 kubelet[3286]: E0124 00:49:29.574853 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw7t4" Jan 24 00:49:29.574944 kubelet[3286]: E0124 00:49:29.574880 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-hw7t4" Jan 24 00:49:29.575071 kubelet[3286]: E0124 00:49:29.574949 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:29.650776 containerd[1724]: time="2026-01-24T00:49:29.650722471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvv8,Uid:83de2bc6-05d2-4a24-a80e-44ff528a5b2e,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:29.667238 containerd[1724]: time="2026-01-24T00:49:29.667194562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbf78d979-7mqd4,Uid:f1ccdfdb-e7f0-4061-88d9-d4f165d7633c,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:29.678111 containerd[1724]: time="2026-01-24T00:49:29.678074989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxr9t,Uid:11df6f13-f663-4026-809b-f40550f91486,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:29.698949 containerd[1724]: time="2026-01-24T00:49:29.698909832Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-kjw59,Uid:10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:49:29.709836 containerd[1724]: time="2026-01-24T00:49:29.709799858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-665v9,Uid:a99a2457-cb1f-40ec-b343-9e0ed2df6091,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:29.736025 containerd[1724]: time="2026-01-24T00:49:29.735969963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd866fd95-t6ls8,Uid:6978497b-e576-4331-b78d-a328fd7094ed,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:29.746019 containerd[1724]: time="2026-01-24T00:49:29.745977180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-lv44h,Uid:c5ffb552-fd5c-4233-a4d0-bee61c2df92f,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:49:29.813038 containerd[1724]: time="2026-01-24T00:49:29.812940359Z" level=error msg="Failed to destroy network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.813730 containerd[1724]: time="2026-01-24T00:49:29.813474566Z" level=error msg="encountered an error cleaning up failed sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.813842 containerd[1724]: time="2026-01-24T00:49:29.813647168Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvv8,Uid:83de2bc6-05d2-4a24-a80e-44ff528a5b2e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.814234 kubelet[3286]: E0124 00:49:29.814107 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.814234 kubelet[3286]: E0124 00:49:29.814216 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8wvv8" Jan 24 00:49:29.814464 kubelet[3286]: E0124 00:49:29.814258 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-8wvv8" Jan 24 00:49:29.814464 kubelet[3286]: E0124 00:49:29.814364 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-8wvv8_kube-system(83de2bc6-05d2-4a24-a80e-44ff528a5b2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-8wvv8_kube-system(83de2bc6-05d2-4a24-a80e-44ff528a5b2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8wvv8" podUID="83de2bc6-05d2-4a24-a80e-44ff528a5b2e" Jan 24 00:49:29.934899 containerd[1724]: time="2026-01-24T00:49:29.934504775Z" level=error msg="Failed to destroy network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.936559 containerd[1724]: time="2026-01-24T00:49:29.936495998Z" level=error msg="encountered an error cleaning up failed sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.937644 containerd[1724]: time="2026-01-24T00:49:29.937599311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxr9t,Uid:11df6f13-f663-4026-809b-f40550f91486,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.938746 kubelet[3286]: E0124 00:49:29.938694 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.938843 kubelet[3286]: E0124 00:49:29.938767 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxr9t" Jan 24 00:49:29.938843 kubelet[3286]: E0124 00:49:29.938800 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rxr9t" Jan 24 00:49:29.938986 kubelet[3286]: E0124 00:49:29.938868 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rxr9t_kube-system(11df6f13-f663-4026-809b-f40550f91486)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rxr9t_kube-system(11df6f13-f663-4026-809b-f40550f91486)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxr9t" podUID="11df6f13-f663-4026-809b-f40550f91486" Jan 24 00:49:29.965299 containerd[1724]: time="2026-01-24T00:49:29.965252733Z" level=error msg="Failed to destroy network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.965815 containerd[1724]: time="2026-01-24T00:49:29.965780639Z" level=error msg="encountered an error cleaning up failed sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.969294 containerd[1724]: time="2026-01-24T00:49:29.969258780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbf78d979-7mqd4,Uid:f1ccdfdb-e7f0-4061-88d9-d4f165d7633c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.969639 kubelet[3286]: E0124 00:49:29.969607 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:29.969747 kubelet[3286]: E0124 00:49:29.969676 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" Jan 24 00:49:29.969747 kubelet[3286]: E0124 00:49:29.969708 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" Jan 24 00:49:29.970002 kubelet[3286]: E0124 00:49:29.969778 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:49:30.026990 containerd[1724]: time="2026-01-24T00:49:30.026932851Z" level=error msg="Failed to destroy network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.027456 containerd[1724]: time="2026-01-24T00:49:30.027303556Z" level=error msg="encountered an error cleaning up failed sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.027456 containerd[1724]: time="2026-01-24T00:49:30.027373656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-kjw59,Uid:10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.027759 kubelet[3286]: E0124 00:49:30.027670 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.027759 kubelet[3286]: E0124 00:49:30.027734 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" Jan 24 00:49:30.028007 kubelet[3286]: E0124 
00:49:30.027781 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" Jan 24 00:49:30.028007 kubelet[3286]: E0124 00:49:30.027845 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:49:30.037634 containerd[1724]: time="2026-01-24T00:49:30.037530575Z" level=error msg="Failed to destroy network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.038161 containerd[1724]: time="2026-01-24T00:49:30.038092381Z" level=error msg="encountered an error cleaning up failed sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.038343 containerd[1724]: time="2026-01-24T00:49:30.038300884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cd866fd95-t6ls8,Uid:6978497b-e576-4331-b78d-a328fd7094ed,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.038673 kubelet[3286]: E0124 00:49:30.038603 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.038778 kubelet[3286]: E0124 00:49:30.038675 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/whisker-cd866fd95-t6ls8" Jan 24 00:49:30.038778 kubelet[3286]: E0124 00:49:30.038702 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-cd866fd95-t6ls8" Jan 24 00:49:30.038917 kubelet[3286]: E0124 00:49:30.038781 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cd866fd95-t6ls8_calico-system(6978497b-e576-4331-b78d-a328fd7094ed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cd866fd95-t6ls8_calico-system(6978497b-e576-4331-b78d-a328fd7094ed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd866fd95-t6ls8" podUID="6978497b-e576-4331-b78d-a328fd7094ed" Jan 24 00:49:30.050480 containerd[1724]: time="2026-01-24T00:49:30.050415425Z" level=error msg="Failed to destroy network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.051025 containerd[1724]: time="2026-01-24T00:49:30.050986131Z" level=error msg="encountered an error cleaning up failed sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.051198 containerd[1724]: time="2026-01-24T00:49:30.051167533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-665v9,Uid:a99a2457-cb1f-40ec-b343-9e0ed2df6091,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.052485 kubelet[3286]: E0124 00:49:30.051530 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.052485 kubelet[3286]: E0124 00:49:30.051591 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:30.052485 kubelet[3286]: E0124 00:49:30.051618 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-665v9" Jan 24 00:49:30.052678 kubelet[3286]: E0124 00:49:30.051693 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:30.058272 containerd[1724]: time="2026-01-24T00:49:30.057872012Z" level=error msg="Failed to destroy network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.058272 containerd[1724]: time="2026-01-24T00:49:30.058124814Z" level=error msg="encountered an error cleaning up failed sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.058272 containerd[1724]: time="2026-01-24T00:49:30.058179615Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-lv44h,Uid:c5ffb552-fd5c-4233-a4d0-bee61c2df92f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.058494 kubelet[3286]: E0124 00:49:30.058400 3286 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.058494 kubelet[3286]: E0124 00:49:30.058454 3286 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" Jan 24 00:49:30.058494 kubelet[3286]: E0124 00:49:30.058478 3286 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" Jan 24 00:49:30.058637 kubelet[3286]: E0124 00:49:30.058538 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:49:30.169231 kubelet[3286]: I0124 00:49:30.169190 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:30.170216 containerd[1724]: time="2026-01-24T00:49:30.170170919Z" level=info msg="StopPodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" Jan 24 00:49:30.170426 containerd[1724]: time="2026-01-24T00:49:30.170398322Z" level=info msg="Ensure that sandbox 0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b in task-service has been cleanup successfully" Jan 24 00:49:30.173982 kubelet[3286]: I0124 00:49:30.173904 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:30.174587 containerd[1724]: time="2026-01-24T00:49:30.174549870Z" level=info msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" Jan 24 00:49:30.177004 containerd[1724]: time="2026-01-24T00:49:30.176971998Z" level=info msg="Ensure that sandbox e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be in task-service has been cleanup successfully" Jan 24 00:49:30.179667 kubelet[3286]: I0124 00:49:30.179615 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:30.180830 containerd[1724]: time="2026-01-24T00:49:30.180731242Z" level=info msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" Jan 24 00:49:30.180999 containerd[1724]: time="2026-01-24T00:49:30.180900944Z" level=info msg="Ensure that sandbox d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809 in task-service has been cleanup successfully" Jan 24 00:49:30.184126 kubelet[3286]: I0124 00:49:30.184021 3286 pod_container_deletor.go:80] "Container not found in pod's 
containers" containerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:30.184754 containerd[1724]: time="2026-01-24T00:49:30.184592187Z" level=info msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" Jan 24 00:49:30.184832 containerd[1724]: time="2026-01-24T00:49:30.184784189Z" level=info msg="Ensure that sandbox 2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6 in task-service has been cleanup successfully" Jan 24 00:49:30.192069 containerd[1724]: time="2026-01-24T00:49:30.191968173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:49:30.206464 kubelet[3286]: I0124 00:49:30.205621 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:30.207932 containerd[1724]: time="2026-01-24T00:49:30.207898958Z" level=info msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" Jan 24 00:49:30.209190 containerd[1724]: time="2026-01-24T00:49:30.208561466Z" level=info msg="Ensure that sandbox aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d in task-service has been cleanup successfully" Jan 24 00:49:30.221086 kubelet[3286]: I0124 00:49:30.221059 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:30.222657 containerd[1724]: time="2026-01-24T00:49:30.222579729Z" level=info msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" Jan 24 00:49:30.224943 containerd[1724]: time="2026-01-24T00:49:30.224799555Z" level=info msg="Ensure that sandbox 086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9 in task-service has been cleanup successfully" Jan 24 00:49:30.234173 kubelet[3286]: I0124 00:49:30.233854 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:30.237182 containerd[1724]: time="2026-01-24T00:49:30.237115199Z" level=info msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" Jan 24 00:49:30.240712 containerd[1724]: time="2026-01-24T00:49:30.240680840Z" level=info msg="Ensure that sandbox 5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c in task-service has been cleanup successfully" Jan 24 00:49:30.245969 kubelet[3286]: I0124 00:49:30.245859 3286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:30.249917 containerd[1724]: time="2026-01-24T00:49:30.248770634Z" level=info msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" Jan 24 00:49:30.252785 containerd[1724]: time="2026-01-24T00:49:30.252739381Z" level=info msg="Ensure that sandbox 86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5 in task-service has been cleanup successfully" Jan 24 00:49:30.295367 containerd[1724]: time="2026-01-24T00:49:30.295306776Z" level=error msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" failed" error="failed to destroy network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.295603 kubelet[3286]: E0124 00:49:30.295565 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:30.295702 kubelet[3286]: E0124 00:49:30.295635 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be"} Jan 24 00:49:30.295761 kubelet[3286]: E0124 00:49:30.295708 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11df6f13-f663-4026-809b-f40550f91486\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.295761 kubelet[3286]: E0124 00:49:30.295742 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11df6f13-f663-4026-809b-f40550f91486\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rxr9t" podUID="11df6f13-f663-4026-809b-f40550f91486" Jan 24 00:49:30.338258 containerd[1724]: time="2026-01-24T00:49:30.338189076Z" level=error msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" failed" error="failed to destroy network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.340104 kubelet[3286]: E0124 00:49:30.339597 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:30.340104 kubelet[3286]: E0124 00:49:30.339666 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c"} Jan 24 00:49:30.340104 kubelet[3286]: E0124 00:49:30.339707 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6978497b-e576-4331-b78d-a328fd7094ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc 
= failed to destroy network for sandbox \\\"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.340104 kubelet[3286]: E0124 00:49:30.339737 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6978497b-e576-4331-b78d-a328fd7094ed\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-cd866fd95-t6ls8" podUID="6978497b-e576-4331-b78d-a328fd7094ed" Jan 24 00:49:30.342874 containerd[1724]: time="2026-01-24T00:49:30.342563027Z" level=error msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" failed" error="failed to destroy network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.342994 kubelet[3286]: E0124 00:49:30.342868 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:30.342994 kubelet[3286]: E0124 00:49:30.342925 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809"} Jan 24 00:49:30.342994 kubelet[3286]: E0124 00:49:30.342965 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.343255 kubelet[3286]: E0124 00:49:30.342994 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:49:30.357058 containerd[1724]: time="2026-01-24T00:49:30.357009095Z" level=error msg="StopPodSandbox for 
\"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" failed" error="failed to destroy network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.360166 kubelet[3286]: E0124 00:49:30.359054 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:30.360166 kubelet[3286]: E0124 00:49:30.359118 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b"} Jan 24 00:49:30.360166 kubelet[3286]: E0124 00:49:30.359197 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.360166 kubelet[3286]: E0124 00:49:30.359229 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:49:30.360625 containerd[1724]: time="2026-01-24T00:49:30.360584936Z" level=error msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" failed" error="failed to destroy network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.360924 kubelet[3286]: E0124 00:49:30.360886 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:30.361009 kubelet[3286]: E0124 00:49:30.360940 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6"} Jan 24 00:49:30.361009 kubelet[3286]: E0124 00:49:30.360977 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11ebe218-698b-4f81-b5c6-5227731a6439\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.361126 kubelet[3286]: E0124 00:49:30.361005 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11ebe218-698b-4f81-b5c6-5227731a6439\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:30.375193 containerd[1724]: time="2026-01-24T00:49:30.375129306Z" level=error msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" failed" error="failed to destroy network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.376113 kubelet[3286]: E0124 00:49:30.376075 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:30.376365 kubelet[3286]: E0124 00:49:30.376296 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9"} Jan 24 00:49:30.376558 kubelet[3286]: E0124 00:49:30.376431 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"83de2bc6-05d2-4a24-a80e-44ff528a5b2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.376558 kubelet[3286]: E0124 00:49:30.376468 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"83de2bc6-05d2-4a24-a80e-44ff528a5b2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-8wvv8" podUID="83de2bc6-05d2-4a24-a80e-44ff528a5b2e" Jan 24 00:49:30.380395 containerd[1724]: time="2026-01-24T00:49:30.380357467Z" level=error msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" failed" error="failed to destroy network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.380812 kubelet[3286]: E0124 00:49:30.380664 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:30.380812 kubelet[3286]: E0124 00:49:30.380713 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d"} Jan 24 00:49:30.380812 kubelet[3286]: E0124 00:49:30.380751 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5ffb552-fd5c-4233-a4d0-bee61c2df92f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.380812 kubelet[3286]: E0124 00:49:30.380780 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5ffb552-fd5c-4233-a4d0-bee61c2df92f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:49:30.383127 containerd[1724]: time="2026-01-24T00:49:30.383087698Z" level=error msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" failed" error="failed to destroy network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:30.383331 kubelet[3286]: E0124 00:49:30.383302 3286 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:30.383412 kubelet[3286]: E0124 00:49:30.383339 3286 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5"} Jan 24 00:49:30.383412 kubelet[3286]: E0124 00:49:30.383370 3286 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:30.383412 kubelet[3286]: E0124 00:49:30.383398 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a99a2457-cb1f-40ec-b343-9e0ed2df6091\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:30.509662 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6-shm.mount: Deactivated successfully. Jan 24 00:49:36.234919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734791337.mount: Deactivated successfully. 
Jan 24 00:49:36.280930 containerd[1724]: time="2026-01-24T00:49:36.280879005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:36.283801 containerd[1724]: time="2026-01-24T00:49:36.283736640Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:49:36.286911 containerd[1724]: time="2026-01-24T00:49:36.286858677Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:36.291157 containerd[1724]: time="2026-01-24T00:49:36.291071029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:36.292208 containerd[1724]: time="2026-01-24T00:49:36.291667836Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.099657263s" Jan 24 00:49:36.292208 containerd[1724]: time="2026-01-24T00:49:36.291709636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:49:36.317085 containerd[1724]: time="2026-01-24T00:49:36.317039943Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:49:36.358261 containerd[1724]: time="2026-01-24T00:49:36.358124941Z" level=info msg="CreateContainer within sandbox \"1bf56bd506be3c6f127e6b45bfd14c5ca9572ba90d592fcc698171e558ff5312\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ae5835b30a19a19351cc2f947df04ca670300e33a57b30732088d5bb0ebe9701\"" Jan 24 00:49:36.359376 containerd[1724]: time="2026-01-24T00:49:36.359336656Z" level=info msg="StartContainer for \"ae5835b30a19a19351cc2f947df04ca670300e33a57b30732088d5bb0ebe9701\"" Jan 24 00:49:36.391327 systemd[1]: Started cri-containerd-ae5835b30a19a19351cc2f947df04ca670300e33a57b30732088d5bb0ebe9701.scope - libcontainer container ae5835b30a19a19351cc2f947df04ca670300e33a57b30732088d5bb0ebe9701. Jan 24 00:49:36.424935 containerd[1724]: time="2026-01-24T00:49:36.424798250Z" level=info msg="StartContainer for \"ae5835b30a19a19351cc2f947df04ca670300e33a57b30732088d5bb0ebe9701\" returns successfully" Jan 24 00:49:36.729652 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:49:36.729799 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
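The pull of ghcr.io/flatcar/calico/node:v3.30.4 requested at 00:49:30 completes here (156,883,675 bytes read), calico-node starts, and the kernel immediately loads WireGuard, which is consistent with calico-node probing at startup for its optional WireGuard encryption support. A small cross-check (not part of the log) of containerd's reported pull duration against the logged PullImage/Pulled timestamps:

    package main

    import (
        "fmt"
        "time"
    )

    // Compare the pull duration containerd reports ("in 6.099657263s")
    // with the PullImage (00:49:30.191968173) and Pulled
    // (00:49:36.291667836) timestamps taken from this log.
    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:49:30.191968173Z")
        done, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:49:36.291667836Z")
        reported, _ := time.ParseDuration("6.099657263s")
        fmt.Printf("measured %v, reported %v\n", done.Sub(start), reported)
        // The two agree to within ~50µs; the small residue is containerd's
        // own bookkeeping between logging PullImage and starting its clock.
    }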
Jan 24 00:49:36.875342 containerd[1724]: time="2026-01-24T00:49:36.875297411Z" level=info msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.961 [INFO][4505] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.961 [INFO][4505] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" iface="eth0" netns="/var/run/netns/cni-1c13dcb8-649d-f31e-fb2f-ba60cc90ae58" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.962 [INFO][4505] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" iface="eth0" netns="/var/run/netns/cni-1c13dcb8-649d-f31e-fb2f-ba60cc90ae58" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.963 [INFO][4505] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" iface="eth0" netns="/var/run/netns/cni-1c13dcb8-649d-f31e-fb2f-ba60cc90ae58" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.963 [INFO][4505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:36.963 [INFO][4505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.003 [INFO][4513] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.003 [INFO][4513] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.004 [INFO][4513] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.011 [WARNING][4513] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.011 [INFO][4513] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.013 [INFO][4513] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:37.018957 containerd[1724]: 2026-01-24 00:49:37.016 [INFO][4505] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:37.020008 containerd[1724]: time="2026-01-24T00:49:37.019372358Z" level=info msg="TearDown network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" successfully" Jan 24 00:49:37.020008 containerd[1724]: time="2026-01-24T00:49:37.019413858Z" level=info msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" returns successfully" Jan 24 00:49:37.043502 kubelet[3286]: I0124 00:49:37.043463 3286 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:49:37.156539 kubelet[3286]: I0124 00:49:37.156488 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6978497b-e576-4331-b78d-a328fd7094ed-whisker-backend-key-pair\") pod \"6978497b-e576-4331-b78d-a328fd7094ed\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") " Jan 24 00:49:37.156539 kubelet[3286]: I0124 00:49:37.156547 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgtwm\" (UniqueName: \"kubernetes.io/projected/6978497b-e576-4331-b78d-a328fd7094ed-kube-api-access-dgtwm\") pod \"6978497b-e576-4331-b78d-a328fd7094ed\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") " Jan 24 00:49:37.156858 kubelet[3286]: I0124 00:49:37.156571 3286 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6978497b-e576-4331-b78d-a328fd7094ed-whisker-ca-bundle\") pod \"6978497b-e576-4331-b78d-a328fd7094ed\" (UID: \"6978497b-e576-4331-b78d-a328fd7094ed\") " Jan 24 00:49:37.157776 kubelet[3286]: I0124 00:49:37.157497 3286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6978497b-e576-4331-b78d-a328fd7094ed-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6978497b-e576-4331-b78d-a328fd7094ed" (UID: "6978497b-e576-4331-b78d-a328fd7094ed"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:49:37.160674 kubelet[3286]: I0124 00:49:37.160643 3286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6978497b-e576-4331-b78d-a328fd7094ed-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6978497b-e576-4331-b78d-a328fd7094ed" (UID: "6978497b-e576-4331-b78d-a328fd7094ed"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:49:37.160982 kubelet[3286]: I0124 00:49:37.160955 3286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6978497b-e576-4331-b78d-a328fd7094ed-kube-api-access-dgtwm" (OuterVolumeSpecName: "kube-api-access-dgtwm") pod "6978497b-e576-4331-b78d-a328fd7094ed" (UID: "6978497b-e576-4331-b78d-a328fd7094ed"). InnerVolumeSpecName "kube-api-access-dgtwm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:49:37.234594 systemd[1]: run-netns-cni\x2d1c13dcb8\x2d649d\x2df31e\x2dfb2f\x2dba60cc90ae58.mount: Deactivated successfully. Jan 24 00:49:37.234748 systemd[1]: var-lib-kubelet-pods-6978497b\x2de576\x2d4331\x2db78d\x2da328fd7094ed-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddgtwm.mount: Deactivated successfully. 
Jan 24 00:49:37.234858 systemd[1]: var-lib-kubelet-pods-6978497b\x2de576\x2d4331\x2db78d\x2da328fd7094ed-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:49:37.257061 kubelet[3286]: I0124 00:49:37.257022 3286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgtwm\" (UniqueName: \"kubernetes.io/projected/6978497b-e576-4331-b78d-a328fd7094ed-kube-api-access-dgtwm\") on node \"ci-4081.3.6-n-f1b70866be\" DevicePath \"\"" Jan 24 00:49:37.257061 kubelet[3286]: I0124 00:49:37.257058 3286 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6978497b-e576-4331-b78d-a328fd7094ed-whisker-ca-bundle\") on node \"ci-4081.3.6-n-f1b70866be\" DevicePath \"\"" Jan 24 00:49:37.257249 kubelet[3286]: I0124 00:49:37.257071 3286 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6978497b-e576-4331-b78d-a328fd7094ed-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-f1b70866be\" DevicePath \"\"" Jan 24 00:49:37.269646 systemd[1]: Removed slice kubepods-besteffort-pod6978497b_e576_4331_b78d_a328fd7094ed.slice - libcontainer container kubepods-besteffort-pod6978497b_e576_4331_b78d_a328fd7094ed.slice. Jan 24 00:49:37.287312 kubelet[3286]: I0124 00:49:37.287254 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-92qft" podStartSLOduration=1.56101613 podStartE2EDuration="20.287233805s" podCreationTimestamp="2026-01-24 00:49:17 +0000 UTC" firstStartedPulling="2026-01-24 00:49:17.566265771 +0000 UTC m=+21.616410327" lastFinishedPulling="2026-01-24 00:49:36.292483446 +0000 UTC m=+40.342628002" observedRunningTime="2026-01-24 00:49:37.285676686 +0000 UTC m=+41.335821242" watchObservedRunningTime="2026-01-24 00:49:37.287233805 +0000 UTC m=+41.337378461" Jan 24 00:49:37.375811 systemd[1]: Created slice kubepods-besteffort-podd86b5148_8637_436b_954a_278a7b8ba7a4.slice - libcontainer container kubepods-besteffort-podd86b5148_8637_436b_954a_278a7b8ba7a4.slice. 
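The pod_startup_latency_tracker entry above records calico-node-92qft taking 20.287233805s end to end, of which almost all was the image pull; kubelet's podStartSLOduration excludes pull time, and the logged numbers confirm that exactly:

    package main

    import "fmt"

    // Recompute podStartSLOduration from the other numbers in the
    // pod_startup_latency_tracker entry: SLO duration is end-to-end start
    // latency minus the time spent pulling images.
    func main() {
        const (
            e2e                 = 20.287233805 // podStartE2EDuration, seconds
            firstStartedPulling = 21.616410327 // m=+ offset, seconds
            lastFinishedPulling = 40.342628002 // m=+ offset, seconds
        )
        pulling := lastFinishedPulling - firstStartedPulling
        fmt.Printf("pulling %.9fs, slo %.9fs\n", pulling, e2e-pulling)
        // ≈ 18.726217675s pulling and ≈ 1.561016130s SLO, matching
        // podStartSLOduration=1.56101613 in the log.
    }

The Removed/Created slice pair shows the old whisker pod's cgroup being cleaned up while a replacement, whisker-66577d4d7-zhfd5, is set up in its place.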
Jan 24 00:49:37.458598 kubelet[3286]: I0124 00:49:37.458533 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d86b5148-8637-436b-954a-278a7b8ba7a4-whisker-ca-bundle\") pod \"whisker-66577d4d7-zhfd5\" (UID: \"d86b5148-8637-436b-954a-278a7b8ba7a4\") " pod="calico-system/whisker-66577d4d7-zhfd5" Jan 24 00:49:37.458598 kubelet[3286]: I0124 00:49:37.458602 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9m6mz\" (UniqueName: \"kubernetes.io/projected/d86b5148-8637-436b-954a-278a7b8ba7a4-kube-api-access-9m6mz\") pod \"whisker-66577d4d7-zhfd5\" (UID: \"d86b5148-8637-436b-954a-278a7b8ba7a4\") " pod="calico-system/whisker-66577d4d7-zhfd5" Jan 24 00:49:37.458820 kubelet[3286]: I0124 00:49:37.458633 3286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/d86b5148-8637-436b-954a-278a7b8ba7a4-whisker-backend-key-pair\") pod \"whisker-66577d4d7-zhfd5\" (UID: \"d86b5148-8637-436b-954a-278a7b8ba7a4\") " pod="calico-system/whisker-66577d4d7-zhfd5" Jan 24 00:49:37.681349 containerd[1724]: time="2026-01-24T00:49:37.681296783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66577d4d7-zhfd5,Uid:d86b5148-8637-436b-954a-278a7b8ba7a4,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:37.856094 systemd-networkd[1368]: cali9cf132ca8bc: Link UP Jan 24 00:49:37.857356 systemd-networkd[1368]: cali9cf132ca8bc: Gained carrier Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.760 [INFO][4536] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.774 [INFO][4536] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0 whisker-66577d4d7- calico-system d86b5148-8637-436b-954a-278a7b8ba7a4 934 0 2026-01-24 00:49:37 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66577d4d7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be whisker-66577d4d7-zhfd5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9cf132ca8bc [] [] }} ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.774 [INFO][4536] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.798 [INFO][4549] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" HandleID="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.798 [INFO][4549] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" HandleID="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f2a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"whisker-66577d4d7-zhfd5", "timestamp":"2026-01-24 00:49:37.798279901 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.798 [INFO][4549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.798 [INFO][4549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.798 [INFO][4549] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.805 [INFO][4549] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.808 [INFO][4549] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.811 [INFO][4549] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.813 [INFO][4549] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.814 [INFO][4549] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.815 [INFO][4549] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.816 [INFO][4549] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.822 [INFO][4549] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.827 [INFO][4549] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.129/26] block=192.168.95.128/26 handle="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.827 [INFO][4549] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.129/26] handle="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.827 [INFO][4549] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:37.875298 containerd[1724]: 2026-01-24 00:49:37.827 [INFO][4549] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.129/26] IPv6=[] ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" HandleID="k8s-pod-network.30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.829 [INFO][4536] cni-plugin/k8s.go 418: Populated endpoint ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0", GenerateName:"whisker-66577d4d7-", Namespace:"calico-system", SelfLink:"", UID:"d86b5148-8637-436b-954a-278a7b8ba7a4", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66577d4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"whisker-66577d4d7-zhfd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9cf132ca8bc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.829 [INFO][4536] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.129/32] ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.829 [INFO][4536] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9cf132ca8bc ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.857 [INFO][4536] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.858 [INFO][4536] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0", GenerateName:"whisker-66577d4d7-", Namespace:"calico-system", SelfLink:"", UID:"d86b5148-8637-436b-954a-278a7b8ba7a4", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66577d4d7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f", Pod:"whisker-66577d4d7-zhfd5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.95.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9cf132ca8bc", MAC:"96:86:ed:99:9a:14", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:37.876894 containerd[1724]: 2026-01-24 00:49:37.872 [INFO][4536] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f" Namespace="calico-system" Pod="whisker-66577d4d7-zhfd5" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--66577d4d7--zhfd5-eth0" Jan 24 00:49:37.896117 containerd[1724]: time="2026-01-24T00:49:37.895809483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:37.896117 containerd[1724]: time="2026-01-24T00:49:37.895854184Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:37.896117 containerd[1724]: time="2026-01-24T00:49:37.895863984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:37.896117 containerd[1724]: time="2026-01-24T00:49:37.895926585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:37.915371 systemd[1]: Started cri-containerd-30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f.scope - libcontainer container 30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f. 
Jan 24 00:49:37.956508 containerd[1724]: time="2026-01-24T00:49:37.955118602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66577d4d7-zhfd5,Uid:d86b5148-8637-436b-954a-278a7b8ba7a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"30e4d820f21575d4523d8447a607ed554ed1ab1ed69182ef7cb3481530680e0f\"" Jan 24 00:49:37.958305 containerd[1724]: time="2026-01-24T00:49:37.958272240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:49:38.067836 kubelet[3286]: I0124 00:49:38.067781 3286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6978497b-e576-4331-b78d-a328fd7094ed" path="/var/lib/kubelet/pods/6978497b-e576-4331-b78d-a328fd7094ed/volumes" Jan 24 00:49:38.220313 containerd[1724]: time="2026-01-24T00:49:38.218987701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:38.223024 containerd[1724]: time="2026-01-24T00:49:38.222890448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:49:38.223024 containerd[1724]: time="2026-01-24T00:49:38.222983350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:49:38.223964 kubelet[3286]: E0124 00:49:38.223353 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:38.223964 kubelet[3286]: E0124 00:49:38.223422 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:38.224128 kubelet[3286]: E0124 00:49:38.223628 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7e553b4077f468cbd98dfe225bad7bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:38.226455 containerd[1724]: time="2026-01-24T00:49:38.226424391Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:49:38.509475 containerd[1724]: time="2026-01-24T00:49:38.509349521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:38.510262 kernel: bpftool[4705]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:49:38.513159 containerd[1724]: time="2026-01-24T00:49:38.513069166Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:49:38.513285 containerd[1724]: time="2026-01-24T00:49:38.513216068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:38.513533 kubelet[3286]: E0124 00:49:38.513493 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:38.513923 kubelet[3286]: E0124 00:49:38.513675 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:38.514413 kubelet[3286]: E0124 00:49:38.514353 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:38.515977 kubelet[3286]: E0124 00:49:38.515829 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:49:38.821783 systemd-networkd[1368]: vxlan.calico: Link UP Jan 24 00:49:38.821796 systemd-networkd[1368]: vxlan.calico: Gained carrier Jan 24 00:49:39.273500 kubelet[3286]: 
E0124 00:49:39.273427 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:49:39.891349 systemd-networkd[1368]: cali9cf132ca8bc: Gained IPv6LL Jan 24 00:49:40.467387 systemd-networkd[1368]: vxlan.calico: Gained IPv6LL Jan 24 00:49:41.066336 containerd[1724]: time="2026-01-24T00:49:41.066175718Z" level=info msg="StopPodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" iface="eth0" netns="/var/run/netns/cni-4b06eef0-913c-2e11-c5b9-95d9629d8f3a" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" iface="eth0" netns="/var/run/netns/cni-4b06eef0-913c-2e11-c5b9-95d9629d8f3a" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" iface="eth0" netns="/var/run/netns/cni-4b06eef0-913c-2e11-c5b9-95d9629d8f3a" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.120 [INFO][4812] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.140 [INFO][4819] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.140 [INFO][4819] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.140 [INFO][4819] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.148 [WARNING][4819] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.148 [INFO][4819] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.149 [INFO][4819] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:41.151770 containerd[1724]: 2026-01-24 00:49:41.150 [INFO][4812] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:41.154306 containerd[1724]: time="2026-01-24T00:49:41.154260086Z" level=info msg="TearDown network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" successfully" Jan 24 00:49:41.154306 containerd[1724]: time="2026-01-24T00:49:41.154305187Z" level=info msg="StopPodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" returns successfully" Jan 24 00:49:41.156266 containerd[1724]: time="2026-01-24T00:49:41.155229298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-kjw59,Uid:10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:49:41.157017 systemd[1]: run-netns-cni\x2d4b06eef0\x2d913c\x2d2e11\x2dc5b9\x2d95d9629d8f3a.mount: Deactivated successfully. 
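Between the ErrImagePull records above and the ImagePullBackOff record that follows sits kubelet's per-image backoff: each failed pull roughly doubles a wait window (a 10s start capped near five minutes), and StartContainer attempts inside the window fail fast with ImagePullBackOff. A self-contained sketch of that bookkeeping, under hypothetical names (kubelet's real implementation lives in its flowcontrol backoff helper and differs in detail):

    package main

    import (
        "fmt"
        "time"
    )

    type pullBackoff struct {
        next     time.Time     // earliest moment a new pull may start
        delay    time.Duration // current backoff window
        maxDelay time.Duration // cap on the window
    }

    // record notes a failed pull and doubles the backoff window.
    func (b *pullBackoff) record(now time.Time) {
        if b.delay == 0 {
            b.delay = 10 * time.Second
        } else {
            b.delay *= 2
        }
        if b.delay > b.maxDelay {
            b.delay = b.maxDelay
        }
        b.next = now.Add(b.delay)
    }

    // allowed reports whether a new pull may be attempted yet; a false
    // answer is what surfaces as ImagePullBackOff.
    func (b *pullBackoff) allowed(now time.Time) bool {
        return !now.Before(b.next)
    }

    func main() {
        b := &pullBackoff{maxDelay: 5 * time.Minute}
        now := time.Now()
        b.record(now)                                     // pull failed: ErrImagePull
        fmt.Println(b.allowed(now.Add(1 * time.Second)))  // false -> ImagePullBackOff
        fmt.Println(b.allowed(now.Add(11 * time.Second))) // true -> retry the pull
    }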
Jan 24 00:49:41.316828 systemd-networkd[1368]: cali6288898b4d2: Link UP Jan 24 00:49:41.319461 systemd-networkd[1368]: cali6288898b4d2: Gained carrier Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.247 [INFO][4826] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0 calico-apiserver-6956bf9c49- calico-apiserver 10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff 962 0 2026-01-24 00:49:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6956bf9c49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be calico-apiserver-6956bf9c49-kjw59 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6288898b4d2 [] [] }} ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.247 [INFO][4826] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.274 [INFO][4838] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" HandleID="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.274 [INFO][4838] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" HandleID="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cafe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f1b70866be", "pod":"calico-apiserver-6956bf9c49-kjw59", "timestamp":"2026-01-24 00:49:41.274112039 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.274 [INFO][4838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.274 [INFO][4838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.274 [INFO][4838] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.282 [INFO][4838] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.286 [INFO][4838] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.290 [INFO][4838] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.291 [INFO][4838] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.293 [INFO][4838] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.293 [INFO][4838] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.294 [INFO][4838] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.302 [INFO][4838] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.311 [INFO][4838] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.130/26] block=192.168.95.128/26 handle="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.311 [INFO][4838] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.130/26] handle="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.311 [INFO][4838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:41.339068 containerd[1724]: 2026-01-24 00:49:41.311 [INFO][4838] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.130/26] IPv6=[] ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" HandleID="k8s-pod-network.7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.313 [INFO][4826] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"calico-apiserver-6956bf9c49-kjw59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6288898b4d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.313 [INFO][4826] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.130/32] ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.314 [INFO][4826] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6288898b4d2 ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.319 [INFO][4826] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.320 
[INFO][4826] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff", ResourceVersion:"962", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf", Pod:"calico-apiserver-6956bf9c49-kjw59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6288898b4d2", MAC:"16:8d:bd:47:00:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:41.340043 containerd[1724]: 2026-01-24 00:49:41.336 [INFO][4826] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-kjw59" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:41.365500 containerd[1724]: time="2026-01-24T00:49:41.365165743Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:41.365500 containerd[1724]: time="2026-01-24T00:49:41.365225944Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:41.365500 containerd[1724]: time="2026-01-24T00:49:41.365245844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:41.365500 containerd[1724]: time="2026-01-24T00:49:41.365340045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:41.393285 systemd[1]: Started cri-containerd-7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf.scope - libcontainer container 7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf. 
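With the sandbox up, the runtime's next logged step is the image pull. A minimal sketch using containerd's v1 Go client against the node's socket, which would reproduce the "not found" resolution failure recorded below for this non-existent tag (the socket path and the k8s.io namespace are assumed from a standard CRI setup):

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Connect to the same containerd instance the kubelet drives.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // This tag is absent on ghcr.io, so resolution should fail with a
        // "not found" error, as the containerd and kubelet lines below record.
        _, err = client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.30.4")
        fmt.Println(err)
    }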
Jan 24 00:49:41.432502 containerd[1724]: time="2026-01-24T00:49:41.432439059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-kjw59,Uid:10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf\"" Jan 24 00:49:41.436181 containerd[1724]: time="2026-01-24T00:49:41.436125503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:49:41.699617 containerd[1724]: time="2026-01-24T00:49:41.699568597Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:41.709116 containerd[1724]: time="2026-01-24T00:49:41.708996611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:49:41.709116 containerd[1724]: time="2026-01-24T00:49:41.709061212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:41.709391 kubelet[3286]: E0124 00:49:41.709249 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:41.709391 kubelet[3286]: E0124 00:49:41.709320 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:41.709826 kubelet[3286]: E0124 00:49:41.709475 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:41.710967 kubelet[3286]: E0124 00:49:41.710888 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:49:42.067742 containerd[1724]: time="2026-01-24T00:49:42.067229054Z" level=info msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.118 [INFO][4902] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.118 [INFO][4902] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" iface="eth0" netns="/var/run/netns/cni-6e73e422-e6a3-ffd1-f82c-490accf16298" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.119 [INFO][4902] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" iface="eth0" netns="/var/run/netns/cni-6e73e422-e6a3-ffd1-f82c-490accf16298" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.120 [INFO][4902] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" iface="eth0" netns="/var/run/netns/cni-6e73e422-e6a3-ffd1-f82c-490accf16298" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.120 [INFO][4902] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.120 [INFO][4902] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.140 [INFO][4909] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.140 [INFO][4909] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.140 [INFO][4909] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.148 [WARNING][4909] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.148 [INFO][4909] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.149 [INFO][4909] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:42.151968 containerd[1724]: 2026-01-24 00:49:42.150 [INFO][4902] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:42.156171 containerd[1724]: time="2026-01-24T00:49:42.154494512Z" level=info msg="TearDown network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" successfully" Jan 24 00:49:42.156171 containerd[1724]: time="2026-01-24T00:49:42.154539112Z" level=info msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" returns successfully" Jan 24 00:49:42.156171 containerd[1724]: time="2026-01-24T00:49:42.155282321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-665v9,Uid:a99a2457-cb1f-40ec-b343-9e0ed2df6091,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:42.155906 systemd[1]: run-netns-cni\x2d6e73e422\x2de6a3\x2dffd1\x2df82c\x2d490accf16298.mount: Deactivated successfully. 
Jan 24 00:49:42.284743 kubelet[3286]: E0124 00:49:42.284357 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:49:42.375108 systemd-networkd[1368]: cali6e48413f1ee: Link UP Jan 24 00:49:42.377928 systemd-networkd[1368]: cali6e48413f1ee: Gained carrier Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.259 [INFO][4916] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0 goldmane-666569f655- calico-system a99a2457-cb1f-40ec-b343-9e0ed2df6091 971 0 2026-01-24 00:49:15 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be goldmane-666569f655-665v9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6e48413f1ee [] [] }} ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.260 [INFO][4916] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.292 [INFO][4927] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" HandleID="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.292 [INFO][4927] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" HandleID="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"goldmane-666569f655-665v9", "timestamp":"2026-01-24 00:49:42.292469681 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.292 [INFO][4927] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.292 [INFO][4927] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.292 [INFO][4927] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.310 [INFO][4927] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.326 [INFO][4927] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.344 [INFO][4927] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.347 [INFO][4927] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.349 [INFO][4927] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.349 [INFO][4927] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.350 [INFO][4927] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180 Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.354 [INFO][4927] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.369 [INFO][4927] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.131/26] block=192.168.95.128/26 handle="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.369 [INFO][4927] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.131/26] handle="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.369 [INFO][4927] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:42.416096 containerd[1724]: 2026-01-24 00:49:42.369 [INFO][4927] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.131/26] IPv6=[] ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" HandleID="k8s-pod-network.2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.372 [INFO][4916] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a99a2457-cb1f-40ec-b343-9e0ed2df6091", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"goldmane-666569f655-665v9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e48413f1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.372 [INFO][4916] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.131/32] ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.372 [INFO][4916] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6e48413f1ee ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.376 [INFO][4916] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.377 [INFO][4916] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" 
Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a99a2457-cb1f-40ec-b343-9e0ed2df6091", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180", Pod:"goldmane-666569f655-665v9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e48413f1ee", MAC:"26:72:88:e4:67:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:42.417021 containerd[1724]: 2026-01-24 00:49:42.409 [INFO][4916] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180" Namespace="calico-system" Pod="goldmane-666569f655-665v9" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:42.466187 containerd[1724]: time="2026-01-24T00:49:42.460084708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:42.466187 containerd[1724]: time="2026-01-24T00:49:42.460193709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:42.466187 containerd[1724]: time="2026-01-24T00:49:42.460216009Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:42.466187 containerd[1724]: time="2026-01-24T00:49:42.460323811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:42.506858 systemd[1]: Started cri-containerd-2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180.scope - libcontainer container 2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180. 
Jan 24 00:49:42.624873 containerd[1724]: time="2026-01-24T00:49:42.624718799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-665v9,Uid:a99a2457-cb1f-40ec-b343-9e0ed2df6091,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180\"" Jan 24 00:49:42.628294 containerd[1724]: time="2026-01-24T00:49:42.627981138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:49:42.903746 containerd[1724]: time="2026-01-24T00:49:42.903591172Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:42.906607 containerd[1724]: time="2026-01-24T00:49:42.906550007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:49:42.906745 containerd[1724]: time="2026-01-24T00:49:42.906569708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:42.907006 kubelet[3286]: E0124 00:49:42.906952 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:42.907470 kubelet[3286]: E0124 00:49:42.907012 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:42.907470 kubelet[3286]: E0124 00:49:42.907258 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:42.908871 kubelet[3286]: E0124 00:49:42.908787 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:43.066213 containerd[1724]: 
time="2026-01-24T00:49:43.065947635Z" level=info msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" Jan 24 00:49:43.155361 systemd-networkd[1368]: cali6288898b4d2: Gained IPv6LL Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.121 [INFO][4996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.121 [INFO][4996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" iface="eth0" netns="/var/run/netns/cni-743075dd-175a-6f5e-6326-aed9c8221ee4" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.121 [INFO][4996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" iface="eth0" netns="/var/run/netns/cni-743075dd-175a-6f5e-6326-aed9c8221ee4" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.122 [INFO][4996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" iface="eth0" netns="/var/run/netns/cni-743075dd-175a-6f5e-6326-aed9c8221ee4" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.122 [INFO][4996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.122 [INFO][4996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.153 [INFO][5004] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.154 [INFO][5004] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.154 [INFO][5004] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.162 [WARNING][5004] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.162 [INFO][5004] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.163 [INFO][5004] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:43.168479 containerd[1724]: 2026-01-24 00:49:43.166 [INFO][4996] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:43.170448 containerd[1724]: time="2026-01-24T00:49:43.169207084Z" level=info msg="TearDown network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" successfully" Jan 24 00:49:43.170448 containerd[1724]: time="2026-01-24T00:49:43.169242985Z" level=info msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" returns successfully" Jan 24 00:49:43.172120 containerd[1724]: time="2026-01-24T00:49:43.170774503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvv8,Uid:83de2bc6-05d2-4a24-a80e-44ff528a5b2e,Namespace:kube-system,Attempt:1,}" Jan 24 00:49:43.176051 systemd[1]: run-netns-cni\x2d743075dd\x2d175a\x2d6f5e\x2d6326\x2daed9c8221ee4.mount: Deactivated successfully. Jan 24 00:49:43.288487 kubelet[3286]: E0124 00:49:43.288425 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:43.291905 kubelet[3286]: E0124 00:49:43.290161 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:49:43.366186 systemd-networkd[1368]: cali69e8c05b64b: Link UP Jan 24 00:49:43.366619 systemd-networkd[1368]: cali69e8c05b64b: Gained carrier Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.250 [INFO][5011] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0 coredns-674b8bbfcf- kube-system 83de2bc6-05d2-4a24-a80e-44ff528a5b2e 987 0 2026-01-24 00:49:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be coredns-674b8bbfcf-8wvv8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69e8c05b64b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.250 [INFO][5011] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" 
WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.275 [INFO][5022] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" HandleID="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.275 [INFO][5022] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" HandleID="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"coredns-674b8bbfcf-8wvv8", "timestamp":"2026-01-24 00:49:43.275693572 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.275 [INFO][5022] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.275 [INFO][5022] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.275 [INFO][5022] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.283 [INFO][5022] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.294 [INFO][5022] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.316 [INFO][5022] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.322 [INFO][5022] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.332 [INFO][5022] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.332 [INFO][5022] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.336 [INFO][5022] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4 Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.345 [INFO][5022] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 
2026-01-24 00:49:43.356 [INFO][5022] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.132/26] block=192.168.95.128/26 handle="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.356 [INFO][5022] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.132/26] handle="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.356 [INFO][5022] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:43.386517 containerd[1724]: 2026-01-24 00:49:43.356 [INFO][5022] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.132/26] IPv6=[] ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" HandleID="k8s-pod-network.9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.360 [INFO][5011] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83de2bc6-05d2-4a24-a80e-44ff528a5b2e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"coredns-674b8bbfcf-8wvv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e8c05b64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.360 [INFO][5011] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.132/32] ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" 
WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.360 [INFO][5011] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69e8c05b64b ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.367 [INFO][5011] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.367 [INFO][5011] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83de2bc6-05d2-4a24-a80e-44ff528a5b2e", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4", Pod:"coredns-674b8bbfcf-8wvv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e8c05b64b", MAC:"26:c0:f6:0b:67:bd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:43.387428 containerd[1724]: 2026-01-24 00:49:43.382 [INFO][5011] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4" Namespace="kube-system" Pod="coredns-674b8bbfcf-8wvv8" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:43.414639 containerd[1724]: time="2026-01-24T00:49:43.414211847Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:43.414639 containerd[1724]: time="2026-01-24T00:49:43.414337349Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:43.414639 containerd[1724]: time="2026-01-24T00:49:43.414376049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:43.414854 containerd[1724]: time="2026-01-24T00:49:43.414725153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:43.439362 systemd[1]: Started cri-containerd-9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4.scope - libcontainer container 9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4. Jan 24 00:49:43.491962 containerd[1724]: time="2026-01-24T00:49:43.491914387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wvv8,Uid:83de2bc6-05d2-4a24-a80e-44ff528a5b2e,Namespace:kube-system,Attempt:1,} returns sandbox id \"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4\"" Jan 24 00:49:43.501880 containerd[1724]: time="2026-01-24T00:49:43.501844207Z" level=info msg="CreateContainer within sandbox \"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:49:43.541650 containerd[1724]: time="2026-01-24T00:49:43.541615088Z" level=info msg="CreateContainer within sandbox \"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"38773b0bb6ef95b67d5f423a7c57988b1d68c05c82b201423815de4e383efa6f\"" Jan 24 00:49:43.542214 containerd[1724]: time="2026-01-24T00:49:43.542184295Z" level=info msg="StartContainer for \"38773b0bb6ef95b67d5f423a7c57988b1d68c05c82b201423815de4e383efa6f\"" Jan 24 00:49:43.569313 systemd[1]: Started cri-containerd-38773b0bb6ef95b67d5f423a7c57988b1d68c05c82b201423815de4e383efa6f.scope - libcontainer container 38773b0bb6ef95b67d5f423a7c57988b1d68c05c82b201423815de4e383efa6f. Jan 24 00:49:43.600362 containerd[1724]: time="2026-01-24T00:49:43.600315398Z" level=info msg="StartContainer for \"38773b0bb6ef95b67d5f423a7c57988b1d68c05c82b201423815de4e383efa6f\" returns successfully" Jan 24 00:49:43.987326 systemd-networkd[1368]: cali6e48413f1ee: Gained IPv6LL Jan 24 00:49:44.068061 containerd[1724]: time="2026-01-24T00:49:44.067860753Z" level=info msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" Jan 24 00:49:44.068487 containerd[1724]: time="2026-01-24T00:49:44.068262157Z" level=info msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.134 [INFO][5134] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.137 [INFO][5134] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" iface="eth0" netns="/var/run/netns/cni-a8e959e0-db55-409c-dd3f-2540367d2614" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.137 [INFO][5134] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" iface="eth0" netns="/var/run/netns/cni-a8e959e0-db55-409c-dd3f-2540367d2614" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.138 [INFO][5134] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" iface="eth0" netns="/var/run/netns/cni-a8e959e0-db55-409c-dd3f-2540367d2614" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.138 [INFO][5134] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.138 [INFO][5134] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.169 [INFO][5148] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.170 [INFO][5148] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.170 [INFO][5148] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.179 [WARNING][5148] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.180 [INFO][5148] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.184 [INFO][5148] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:44.194332 containerd[1724]: 2026-01-24 00:49:44.189 [INFO][5134] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:44.195671 containerd[1724]: time="2026-01-24T00:49:44.194781988Z" level=info msg="TearDown network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" successfully" Jan 24 00:49:44.200316 containerd[1724]: time="2026-01-24T00:49:44.194822188Z" level=info msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" returns successfully" Jan 24 00:49:44.202166 containerd[1724]: time="2026-01-24T00:49:44.200535757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-lv44h,Uid:c5ffb552-fd5c-4233-a4d0-bee61c2df92f,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:49:44.201330 systemd[1]: run-netns-cni\x2da8e959e0\x2ddb55\x2d409c\x2ddd3f\x2d2540367d2614.mount: Deactivated successfully. Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.148 [INFO][5135] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.148 [INFO][5135] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" iface="eth0" netns="/var/run/netns/cni-872bb134-816d-9d8e-b4f9-f16db4c411e3" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.150 [INFO][5135] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" iface="eth0" netns="/var/run/netns/cni-872bb134-816d-9d8e-b4f9-f16db4c411e3" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.150 [INFO][5135] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" iface="eth0" netns="/var/run/netns/cni-872bb134-816d-9d8e-b4f9-f16db4c411e3" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.150 [INFO][5135] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.150 [INFO][5135] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.199 [INFO][5153] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.200 [INFO][5153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.203 [INFO][5153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.214 [WARNING][5153] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.214 [INFO][5153] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.217 [INFO][5153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:44.222261 containerd[1724]: 2026-01-24 00:49:44.218 [INFO][5135] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:44.225314 containerd[1724]: time="2026-01-24T00:49:44.225276556Z" level=info msg="TearDown network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" successfully" Jan 24 00:49:44.225415 containerd[1724]: time="2026-01-24T00:49:44.225312757Z" level=info msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" returns successfully" Jan 24 00:49:44.226756 systemd[1]: run-netns-cni\x2d872bb134\x2d816d\x2d9d8e\x2db4f9\x2df16db4c411e3.mount: Deactivated successfully. Jan 24 00:49:44.227884 containerd[1724]: time="2026-01-24T00:49:44.227770787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbf78d979-7mqd4,Uid:f1ccdfdb-e7f0-4061-88d9-d4f165d7633c,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:44.298091 kubelet[3286]: E0124 00:49:44.297954 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:44.329870 kubelet[3286]: I0124 00:49:44.329422 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8wvv8" podStartSLOduration=42.329402616 podStartE2EDuration="42.329402616s" podCreationTimestamp="2026-01-24 00:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:44.327430592 +0000 UTC m=+48.377575148" watchObservedRunningTime="2026-01-24 00:49:44.329402616 +0000 UTC m=+48.379547172" Jan 24 00:49:44.443947 systemd-networkd[1368]: caliec268dd8931: Link UP Jan 24 00:49:44.446486 systemd-networkd[1368]: caliec268dd8931: Gained carrier Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.290 [INFO][5166] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0 calico-apiserver-6956bf9c49- calico-apiserver c5ffb552-fd5c-4233-a4d0-bee61c2df92f 1006 0 2026-01-24 00:49:12 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6956bf9c49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be calico-apiserver-6956bf9c49-lv44h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliec268dd8931 [] [] }} ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.292 [INFO][5166] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.351 [INFO][5192] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" HandleID="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.351 [INFO][5192] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" HandleID="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-f1b70866be", "pod":"calico-apiserver-6956bf9c49-lv44h", "timestamp":"2026-01-24 00:49:44.351488483 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.352 [INFO][5192] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.352 [INFO][5192] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.352 [INFO][5192] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.360 [INFO][5192] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.371 [INFO][5192] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.389 [INFO][5192] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.397 [INFO][5192] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.405 [INFO][5192] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.405 [INFO][5192] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.411 [INFO][5192] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.419 [INFO][5192] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.433 [INFO][5192] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.133/26] block=192.168.95.128/26 handle="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.433 [INFO][5192] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.133/26] handle="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.433 [INFO][5192] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:44.468007 containerd[1724]: 2026-01-24 00:49:44.433 [INFO][5192] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.133/26] IPv6=[] ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" HandleID="k8s-pod-network.61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.437 [INFO][5166] cni-plugin/k8s.go 418: Populated endpoint ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5ffb552-fd5c-4233-a4d0-bee61c2df92f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"calico-apiserver-6956bf9c49-lv44h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec268dd8931", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.438 [INFO][5166] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.133/32] ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.438 [INFO][5166] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliec268dd8931 ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.445 [INFO][5166] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.446 
[INFO][5166] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5ffb552-fd5c-4233-a4d0-bee61c2df92f", ResourceVersion:"1006", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea", Pod:"calico-apiserver-6956bf9c49-lv44h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec268dd8931", MAC:"8a:de:9f:d7:22:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.469541 containerd[1724]: 2026-01-24 00:49:44.464 [INFO][5166] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea" Namespace="calico-apiserver" Pod="calico-apiserver-6956bf9c49-lv44h" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:44.515259 containerd[1724]: time="2026-01-24T00:49:44.513174338Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:44.515441 containerd[1724]: time="2026-01-24T00:49:44.515071761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:44.515441 containerd[1724]: time="2026-01-24T00:49:44.515096362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.515441 containerd[1724]: time="2026-01-24T00:49:44.515205363Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.530088 systemd-networkd[1368]: cali79169f50263: Link UP Jan 24 00:49:44.533662 systemd-networkd[1368]: cali79169f50263: Gained carrier Jan 24 00:49:44.558310 systemd[1]: Started cri-containerd-61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea.scope - libcontainer container 61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea. 
Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.340 [INFO][5178] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0 calico-kube-controllers-7cbf78d979- calico-system f1ccdfdb-e7f0-4061-88d9-d4f165d7633c 1007 0 2026-01-24 00:49:17 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cbf78d979 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be calico-kube-controllers-7cbf78d979-7mqd4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali79169f50263 [] [] }} ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.341 [INFO][5178] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.414 [INFO][5202] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" HandleID="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.417 [INFO][5202] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" HandleID="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cb830), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"calico-kube-controllers-7cbf78d979-7mqd4", "timestamp":"2026-01-24 00:49:44.414796949 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.417 [INFO][5202] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.433 [INFO][5202] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.434 [INFO][5202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.461 [INFO][5202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.475 [INFO][5202] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.480 [INFO][5202] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.482 [INFO][5202] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.486 [INFO][5202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.487 [INFO][5202] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.494 [INFO][5202] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950 Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.504 [INFO][5202] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.517 [INFO][5202] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.134/26] block=192.168.95.128/26 handle="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.517 [INFO][5202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.134/26] handle="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.517 [INFO][5202] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:44.568913 containerd[1724]: 2026-01-24 00:49:44.517 [INFO][5202] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.134/26] IPv6=[] ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" HandleID="k8s-pod-network.c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.521 [INFO][5178] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0", GenerateName:"calico-kube-controllers-7cbf78d979-", Namespace:"calico-system", SelfLink:"", UID:"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbf78d979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"calico-kube-controllers-7cbf78d979-7mqd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79169f50263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.521 [INFO][5178] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.134/32] ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.521 [INFO][5178] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79169f50263 ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.536 [INFO][5178] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" 
WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.538 [INFO][5178] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0", GenerateName:"calico-kube-controllers-7cbf78d979-", Namespace:"calico-system", SelfLink:"", UID:"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbf78d979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950", Pod:"calico-kube-controllers-7cbf78d979-7mqd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79169f50263", MAC:"aa:26:98:fd:a2:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.569783 containerd[1724]: 2026-01-24 00:49:44.566 [INFO][5178] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950" Namespace="calico-system" Pod="calico-kube-controllers-7cbf78d979-7mqd4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:44.615103 containerd[1724]: time="2026-01-24T00:49:44.614822068Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:44.615379 containerd[1724]: time="2026-01-24T00:49:44.615136872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:44.615484 containerd[1724]: time="2026-01-24T00:49:44.615415175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.616270 containerd[1724]: time="2026-01-24T00:49:44.616177784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.651552 systemd[1]: Started cri-containerd-c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950.scope - libcontainer container c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950. Jan 24 00:49:44.719132 containerd[1724]: time="2026-01-24T00:49:44.719072529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6956bf9c49-lv44h,Uid:c5ffb552-fd5c-4233-a4d0-bee61c2df92f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea\"" Jan 24 00:49:44.721554 containerd[1724]: time="2026-01-24T00:49:44.721518558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:49:44.760942 containerd[1724]: time="2026-01-24T00:49:44.760897834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cbf78d979-7mqd4,Uid:f1ccdfdb-e7f0-4061-88d9-d4f165d7633c,Namespace:calico-system,Attempt:1,} returns sandbox id \"c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950\"" Jan 24 00:49:44.883411 systemd-networkd[1368]: cali69e8c05b64b: Gained IPv6LL Jan 24 00:49:44.994462 containerd[1724]: time="2026-01-24T00:49:44.994405558Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:44.997529 containerd[1724]: time="2026-01-24T00:49:44.997469096Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:49:44.997742 containerd[1724]: time="2026-01-24T00:49:44.997494996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:44.997804 kubelet[3286]: E0124 00:49:44.997716 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:44.997804 kubelet[3286]: E0124 00:49:44.997791 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:44.998129 kubelet[3286]: E0124 00:49:44.998066 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg2lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:44.998724 containerd[1724]: time="2026-01-24T00:49:44.998684210Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:49:44.999555 kubelet[3286]: E0124 00:49:44.999488 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:49:45.066529 containerd[1724]: time="2026-01-24T00:49:45.066485830Z" level=info msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" Jan 24 00:49:45.068280 containerd[1724]: time="2026-01-24T00:49:45.067154138Z" level=info msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.132 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 
00:49:45.134 [INFO][5322] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" iface="eth0" netns="/var/run/netns/cni-1fea2bb3-07ad-af54-91a9-d39e54195554" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.134 [INFO][5322] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" iface="eth0" netns="/var/run/netns/cni-1fea2bb3-07ad-af54-91a9-d39e54195554" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.137 [INFO][5322] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" iface="eth0" netns="/var/run/netns/cni-1fea2bb3-07ad-af54-91a9-d39e54195554" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.137 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.137 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.179 [INFO][5341] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.181 [INFO][5341] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.181 [INFO][5341] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.197 [WARNING][5341] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.197 [INFO][5341] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.201 [INFO][5341] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:45.207159 containerd[1724]: 2026-01-24 00:49:45.204 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:45.213098 containerd[1724]: time="2026-01-24T00:49:45.207490636Z" level=info msg="TearDown network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" successfully" Jan 24 00:49:45.213098 containerd[1724]: time="2026-01-24T00:49:45.207542136Z" level=info msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" returns successfully" Jan 24 00:49:45.213098 containerd[1724]: time="2026-01-24T00:49:45.209266857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw7t4,Uid:11ebe218-698b-4f81-b5c6-5227731a6439,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:45.213884 systemd[1]: run-netns-cni\x2d1fea2bb3\x2d07ad\x2daf54\x2d91a9\x2dd39e54195554.mount: Deactivated successfully. Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.146 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.146 [INFO][5331] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" iface="eth0" netns="/var/run/netns/cni-55629653-bd81-bb79-6001-7769b0ab7850" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.147 [INFO][5331] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" iface="eth0" netns="/var/run/netns/cni-55629653-bd81-bb79-6001-7769b0ab7850" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.147 [INFO][5331] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" iface="eth0" netns="/var/run/netns/cni-55629653-bd81-bb79-6001-7769b0ab7850" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.147 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.147 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.188 [INFO][5346] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.189 [INFO][5346] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.201 [INFO][5346] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.215 [WARNING][5346] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.215 [INFO][5346] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.219 [INFO][5346] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:45.223821 containerd[1724]: 2026-01-24 00:49:45.222 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:45.224861 containerd[1724]: time="2026-01-24T00:49:45.224339439Z" level=info msg="TearDown network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" successfully" Jan 24 00:49:45.224861 containerd[1724]: time="2026-01-24T00:49:45.224365440Z" level=info msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" returns successfully" Jan 24 00:49:45.225521 containerd[1724]: time="2026-01-24T00:49:45.225260951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxr9t,Uid:11df6f13-f663-4026-809b-f40550f91486,Namespace:kube-system,Attempt:1,}" Jan 24 00:49:45.228919 systemd[1]: run-netns-cni\x2d55629653\x2dbd81\x2dbb79\x2d6001\x2d7769b0ab7850.mount: Deactivated successfully. Jan 24 00:49:45.269919 containerd[1724]: time="2026-01-24T00:49:45.269886590Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:45.286781 containerd[1724]: time="2026-01-24T00:49:45.286390790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:49:45.286781 containerd[1724]: time="2026-01-24T00:49:45.286654193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:45.286938 kubelet[3286]: E0124 00:49:45.286819 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:49:45.286938 kubelet[3286]: E0124 00:49:45.286872 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:49:45.287460 kubelet[3286]: E0124 00:49:45.287075 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw48x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:45.288436 kubelet[3286]: E0124 00:49:45.288397 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:49:45.315713 kubelet[3286]: E0124 00:49:45.315464 3286 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:49:45.322565 kubelet[3286]: E0124 00:49:45.321853 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:49:45.459415 systemd-networkd[1368]: caliec268dd8931: Gained IPv6LL Jan 24 00:49:45.481690 systemd-networkd[1368]: cali169ea72dc39: Link UP Jan 24 00:49:45.482553 systemd-networkd[1368]: cali169ea72dc39: Gained carrier Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.361 [INFO][5356] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0 csi-node-driver- calico-system 11ebe218-698b-4f81-b5c6-5227731a6439 1038 0 2026-01-24 00:49:17 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be csi-node-driver-hw7t4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali169ea72dc39 [] [] }} ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.361 [INFO][5356] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.429 [INFO][5383] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" HandleID="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.429 [INFO][5383] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" HandleID="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" 
Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003321b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"csi-node-driver-hw7t4", "timestamp":"2026-01-24 00:49:45.429192917 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.429 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.429 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.429 [INFO][5383] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.436 [INFO][5383] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.440 [INFO][5383] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.444 [INFO][5383] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.445 [INFO][5383] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.447 [INFO][5383] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.447 [INFO][5383] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.448 [INFO][5383] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83 Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.456 [INFO][5383] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5383] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.135/26] block=192.168.95.128/26 handle="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5383] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.135/26] handle="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:45.510110 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5383] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.135/26] IPv6=[] ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" HandleID="k8s-pod-network.df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.474 [INFO][5356] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11ebe218-698b-4f81-b5c6-5227731a6439", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"csi-node-driver-hw7t4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali169ea72dc39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.474 [INFO][5356] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.135/32] ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.474 [INFO][5356] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali169ea72dc39 ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.483 [INFO][5356] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.484 [INFO][5356] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11ebe218-698b-4f81-b5c6-5227731a6439", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83", Pod:"csi-node-driver-hw7t4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali169ea72dc39", MAC:"ce:db:ba:f6:e3:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.511035 containerd[1724]: 2026-01-24 00:49:45.505 [INFO][5356] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83" Namespace="calico-system" Pod="csi-node-driver-hw7t4" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:45.540405 containerd[1724]: time="2026-01-24T00:49:45.540288761Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:45.540405 containerd[1724]: time="2026-01-24T00:49:45.540394062Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:45.540566 containerd[1724]: time="2026-01-24T00:49:45.540423662Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.540613 containerd[1724]: time="2026-01-24T00:49:45.540577064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.568332 systemd[1]: Started cri-containerd-df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83.scope - libcontainer container df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83. 
Jan 24 00:49:45.587268 systemd-networkd[1368]: cali79169f50263: Gained IPv6LL Jan 24 00:49:45.598677 systemd-networkd[1368]: calidee97ac7098: Link UP Jan 24 00:49:45.605116 systemd-networkd[1368]: calidee97ac7098: Gained carrier Jan 24 00:49:45.625415 containerd[1724]: time="2026-01-24T00:49:45.625370790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-hw7t4,Uid:11ebe218-698b-4f81-b5c6-5227731a6439,Namespace:calico-system,Attempt:1,} returns sandbox id \"df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83\"" Jan 24 00:49:45.632910 containerd[1724]: time="2026-01-24T00:49:45.631645965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.383 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0 coredns-674b8bbfcf- kube-system 11df6f13-f663-4026-809b-f40550f91486 1039 0 2026-01-24 00:49:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-f1b70866be coredns-674b8bbfcf-rxr9t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calidee97ac7098 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.383 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.432 [INFO][5389] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" HandleID="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.432 [INFO][5389] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" HandleID="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5680), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-f1b70866be", "pod":"coredns-674b8bbfcf-rxr9t", "timestamp":"2026-01-24 00:49:45.432821461 +0000 UTC"}, Hostname:"ci-4081.3.6-n-f1b70866be", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.433 [INFO][5389] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5389] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
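The "Gained IPv6LL" entries record each cali* veth acquiring an IPv6 link-local address. Assuming the kernel's default EUI-64 address generation (addr_gen_mode 0), that address is derived from the interface MAC by flipping the universal/local bit and splicing ff:fe into the middle, so the MAC ce:db:ba:f6:e3:75 from the endpoint dump above would yield fe80::ccdb:baff:fef6:e375. A worked sketch:

    def eui64_link_local(mac):
        # fe80::/64 plus EUI-64 interface ID: flip bit 1 of the first octet,
        # insert ff:fe between the third and fourth MAC octets.
        b = [int(x, 16) for x in mac.split(":")]
        b[0] ^= 0x02
        groups = (b[0] << 8 | b[1], b[2] << 8 | 0xFF,
                  0xFE << 8 | b[3], b[4] << 8 | b[5])
        return "fe80::" + ":".join("%x" % g for g in groups)

    print(eui64_link_local("ce:db:ba:f6:e3:75"))   # fe80::ccdb:baff:fef6:e375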
Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.470 [INFO][5389] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-f1b70866be' Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.537 [INFO][5389] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.543 [INFO][5389] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.548 [INFO][5389] ipam/ipam.go 511: Trying affinity for 192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.550 [INFO][5389] ipam/ipam.go 158: Attempting to load block cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.559 [INFO][5389] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.95.128/26 host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.559 [INFO][5389] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.95.128/26 handle="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.565 [INFO][5389] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.573 [INFO][5389] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.95.128/26 handle="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.583 [INFO][5389] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.95.136/26] block=192.168.95.128/26 handle="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.583 [INFO][5389] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.95.136/26] handle="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" host="ci-4081.3.6-n-f1b70866be" Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.583 [INFO][5389] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
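In the coredns endpoint dumps that follow, the WorkloadEndpointPort values print as Go hex literals (Port:0x35, Port:0x23c1), while the plugin.go 340 entry above lists the same ports in decimal. Decoding confirms they agree: 0x35 is DNS on 53 and 0x23c1 is the metrics port 9153.

    # Hex port literals from the v3.WorkloadEndpointPort dumps, decoded.
    for name, port in (("dns", "0x35"), ("dns-tcp", "0x35"), ("metrics", "0x23c1")):
        print(name, int(port, 16))   # dns 53 / dns-tcp 53 / metrics 9153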
Jan 24 00:49:45.633455 containerd[1724]: 2026-01-24 00:49:45.583 [INFO][5389] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.95.136/26] IPv6=[] ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" HandleID="k8s-pod-network.15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.585 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"11df6f13-f663-4026-809b-f40550f91486", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"", Pod:"coredns-674b8bbfcf-rxr9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee97ac7098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.589 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.95.136/32] ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.589 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidee97ac7098 ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.608 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.610 [INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"11df6f13-f663-4026-809b-f40550f91486", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe", Pod:"coredns-674b8bbfcf-rxr9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee97ac7098", MAC:"4e:3c:06:b0:af:13", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.635283 containerd[1724]: 2026-01-24 00:49:45.629 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe" Namespace="kube-system" Pod="coredns-674b8bbfcf-rxr9t" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:45.675289 containerd[1724]: time="2026-01-24T00:49:45.675193492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:45.675450 containerd[1724]: time="2026-01-24T00:49:45.675269493Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:45.675450 containerd[1724]: time="2026-01-24T00:49:45.675292593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.675450 containerd[1724]: time="2026-01-24T00:49:45.675381994Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.694320 systemd[1]: Started cri-containerd-15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe.scope - libcontainer container 15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe. Jan 24 00:49:45.741622 containerd[1724]: time="2026-01-24T00:49:45.741583395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rxr9t,Uid:11df6f13-f663-4026-809b-f40550f91486,Namespace:kube-system,Attempt:1,} returns sandbox id \"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe\"" Jan 24 00:49:45.750043 containerd[1724]: time="2026-01-24T00:49:45.749950996Z" level=info msg="CreateContainer within sandbox \"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:49:45.789882 containerd[1724]: time="2026-01-24T00:49:45.789849779Z" level=info msg="CreateContainer within sandbox \"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f157c9350cefd55b9264756b80766ac5883e5781aa48de99a12f39b268a9156\"" Jan 24 00:49:45.790792 containerd[1724]: time="2026-01-24T00:49:45.790492487Z" level=info msg="StartContainer for \"8f157c9350cefd55b9264756b80766ac5883e5781aa48de99a12f39b268a9156\"" Jan 24 00:49:45.816297 systemd[1]: Started cri-containerd-8f157c9350cefd55b9264756b80766ac5883e5781aa48de99a12f39b268a9156.scope - libcontainer container 8f157c9350cefd55b9264756b80766ac5883e5781aa48de99a12f39b268a9156. Jan 24 00:49:45.846627 containerd[1724]: time="2026-01-24T00:49:45.846506164Z" level=info msg="StartContainer for \"8f157c9350cefd55b9264756b80766ac5883e5781aa48de99a12f39b268a9156\" returns successfully" Jan 24 00:49:45.908756 containerd[1724]: time="2026-01-24T00:49:45.908704916Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:45.912358 containerd[1724]: time="2026-01-24T00:49:45.912231759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:49:45.912626 containerd[1724]: time="2026-01-24T00:49:45.912294660Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:49:45.912824 kubelet[3286]: E0124 00:49:45.912786 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:49:45.912901 kubelet[3286]: E0124 00:49:45.912839 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:49:45.913038 kubelet[3286]: E0124 00:49:45.912998 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:45.915386 containerd[1724]: time="2026-01-24T00:49:45.915362497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:49:46.176170 containerd[1724]: time="2026-01-24T00:49:46.174979137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:46.180842 containerd[1724]: time="2026-01-24T00:49:46.180738006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:49:46.180842 containerd[1724]: time="2026-01-24T00:49:46.180784107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:49:46.182223 kubelet[3286]: E0124 00:49:46.181283 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:49:46.182223 kubelet[3286]: E0124 00:49:46.181334 3286 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:49:46.182223 kubelet[3286]: E0124 00:49:46.181473 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:46.182645 kubelet[3286]: E0124 00:49:46.182606 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:46.328187 kubelet[3286]: E0124 00:49:46.327662 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:49:46.328187 kubelet[3286]: E0124 00:49:46.327872 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:49:46.329415 kubelet[3286]: E0124 00:49:46.329195 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:46.340990 kubelet[3286]: I0124 00:49:46.340940 3286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rxr9t" podStartSLOduration=44.340923044 podStartE2EDuration="44.340923044s" podCreationTimestamp="2026-01-24 00:49:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:46.339064021 +0000 UTC m=+50.389208677" watchObservedRunningTime="2026-01-24 00:49:46.340923044 +0000 UTC m=+50.391067700" Jan 24 00:49:47.251400 systemd-networkd[1368]: cali169ea72dc39: Gained IPv6LL Jan 24 00:49:47.329414 kubelet[3286]: E0124 00:49:47.329367 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:49:47.635509 systemd-networkd[1368]: calidee97ac7098: Gained IPv6LL Jan 24 00:49:52.068631 containerd[1724]: time="2026-01-24T00:49:52.068388772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:49:52.333189 containerd[1724]: time="2026-01-24T00:49:52.332862778Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:52.339828 containerd[1724]: time="2026-01-24T00:49:52.339715862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:49:52.340053 containerd[1724]: time="2026-01-24T00:49:52.339898064Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:49:52.340099 kubelet[3286]: E0124 00:49:52.340062 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:52.340483 kubelet[3286]: E0124 00:49:52.340106 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:52.340483 kubelet[3286]: E0124 00:49:52.340266 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7e553b4077f468cbd98dfe225bad7bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:52.343342 containerd[1724]: time="2026-01-24T00:49:52.343286505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:49:52.612976 containerd[1724]: time="2026-01-24T00:49:52.612931374Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:52.616579 containerd[1724]: time="2026-01-24T00:49:52.616527018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:49:52.616698 containerd[1724]: time="2026-01-24T00:49:52.616630719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:52.616873 kubelet[3286]: E0124 00:49:52.616831 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:52.616962 kubelet[3286]: E0124 00:49:52.616886 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:52.617078 kubelet[3286]: E0124 00:49:52.617037 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:52.618618 kubelet[3286]: E0124 00:49:52.618572 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:49:55.066707 containerd[1724]: time="2026-01-24T00:49:55.066327421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:49:55.328627 containerd[1724]: time="2026-01-24T00:49:55.328346398Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:55.333709 containerd[1724]: time="2026-01-24T00:49:55.333657963Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:49:55.333835 containerd[1724]: time="2026-01-24T00:49:55.333750164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:55.333964 kubelet[3286]: E0124 00:49:55.333923 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:55.333964 kubelet[3286]: E0124 00:49:55.333975 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:55.334433 kubelet[3286]: E0124 00:49:55.334167 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:55.335819 kubelet[3286]: E0124 00:49:55.335767 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:49:56.043733 containerd[1724]: time="2026-01-24T00:49:56.043369768Z" level=info msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.080 [WARNING][5566] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0", GenerateName:"calico-kube-controllers-7cbf78d979-", Namespace:"calico-system", SelfLink:"", UID:"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbf78d979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950", Pod:"calico-kube-controllers-7cbf78d979-7mqd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79169f50263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.080 [INFO][5566] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.080 [INFO][5566] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" iface="eth0" netns="" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.080 [INFO][5566] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.080 [INFO][5566] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.103 [INFO][5575] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.103 [INFO][5575] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.103 [INFO][5575] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.108 [WARNING][5575] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.109 [INFO][5575] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.110 [INFO][5575] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.113751 containerd[1724]: 2026-01-24 00:49:56.111 [INFO][5566] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.115222 containerd[1724]: time="2026-01-24T00:49:56.113788122Z" level=info msg="TearDown network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" successfully" Jan 24 00:49:56.115222 containerd[1724]: time="2026-01-24T00:49:56.113813422Z" level=info msg="StopPodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" returns successfully" Jan 24 00:49:56.115222 containerd[1724]: time="2026-01-24T00:49:56.114374329Z" level=info msg="RemovePodSandbox for \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" Jan 24 00:49:56.115222 containerd[1724]: time="2026-01-24T00:49:56.114664732Z" level=info msg="Forcibly stopping sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\"" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.150 [WARNING][5589] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0", GenerateName:"calico-kube-controllers-7cbf78d979-", Namespace:"calico-system", SelfLink:"", UID:"f1ccdfdb-e7f0-4061-88d9-d4f165d7633c", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cbf78d979", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"c9ae165bb4632627309b212c7f7ec1c7fa121c1587711e5a81c9ab6ec6d80950", Pod:"calico-kube-controllers-7cbf78d979-7mqd4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.95.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali79169f50263", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.150 [INFO][5589] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.150 [INFO][5589] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" iface="eth0" netns="" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.150 [INFO][5589] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.150 [INFO][5589] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.169 [INFO][5597] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.170 [INFO][5597] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.170 [INFO][5597] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.175 [WARNING][5597] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.175 [INFO][5597] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" HandleID="k8s-pod-network.d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--kube--controllers--7cbf78d979--7mqd4-eth0" Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.177 [INFO][5597] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.179734 containerd[1724]: 2026-01-24 00:49:56.178 [INFO][5589] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809" Jan 24 00:49:56.180398 containerd[1724]: time="2026-01-24T00:49:56.179787322Z" level=info msg="TearDown network for sandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" successfully" Jan 24 00:49:56.195924 containerd[1724]: time="2026-01-24T00:49:56.195886217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.196063 containerd[1724]: time="2026-01-24T00:49:56.195952818Z" level=info msg="RemovePodSandbox \"d12c35a3cf37ceba083491498ec3a8eb29622dfe1526998b27674e29d1de1809\" returns successfully" Jan 24 00:49:56.196493 containerd[1724]: time="2026-01-24T00:49:56.196459224Z" level=info msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.229 [WARNING][5612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11ebe218-698b-4f81-b5c6-5227731a6439", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83", Pod:"csi-node-driver-hw7t4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali169ea72dc39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.229 [INFO][5612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.229 [INFO][5612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" iface="eth0" netns="" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.229 [INFO][5612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.229 [INFO][5612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.259 [INFO][5619] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.259 [INFO][5619] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.259 [INFO][5619] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.266 [WARNING][5619] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.266 [INFO][5619] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.268 [INFO][5619] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.271354 containerd[1724]: 2026-01-24 00:49:56.269 [INFO][5612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.271354 containerd[1724]: time="2026-01-24T00:49:56.271299932Z" level=info msg="TearDown network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" successfully" Jan 24 00:49:56.271354 containerd[1724]: time="2026-01-24T00:49:56.271321132Z" level=info msg="StopPodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" returns successfully" Jan 24 00:49:56.272221 containerd[1724]: time="2026-01-24T00:49:56.272191542Z" level=info msg="RemovePodSandbox for \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" Jan 24 00:49:56.272337 containerd[1724]: time="2026-01-24T00:49:56.272223643Z" level=info msg="Forcibly stopping sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\"" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.302 [WARNING][5633] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"11ebe218-698b-4f81-b5c6-5227731a6439", ResourceVersion:"1091", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"df1128953454b9fe6ad99bf23d2aa1963d4c123f50600261a682cee2eac60a83", Pod:"csi-node-driver-hw7t4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.95.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali169ea72dc39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.303 [INFO][5633] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.303 [INFO][5633] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" iface="eth0" netns="" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.303 [INFO][5633] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.303 [INFO][5633] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.321 [INFO][5640] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.321 [INFO][5640] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.321 [INFO][5640] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.327 [WARNING][5640] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.327 [INFO][5640] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" HandleID="k8s-pod-network.2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Workload="ci--4081.3.6--n--f1b70866be-k8s-csi--node--driver--hw7t4-eth0" Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.328 [INFO][5640] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.331036 containerd[1724]: 2026-01-24 00:49:56.329 [INFO][5633] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6" Jan 24 00:49:56.333431 containerd[1724]: time="2026-01-24T00:49:56.331078556Z" level=info msg="TearDown network for sandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" successfully" Jan 24 00:49:56.339100 containerd[1724]: time="2026-01-24T00:49:56.339058753Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.339263 containerd[1724]: time="2026-01-24T00:49:56.339121254Z" level=info msg="RemovePodSandbox \"2bff2a71ee6887e8ae714e573eff6d6de703ad880494ad51642899afcb5c9fe6\" returns successfully" Jan 24 00:49:56.339712 containerd[1724]: time="2026-01-24T00:49:56.339688161Z" level=info msg="StopPodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.373 [WARNING][5654] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf", Pod:"calico-apiserver-6956bf9c49-kjw59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6288898b4d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.373 [INFO][5654] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.373 [INFO][5654] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" iface="eth0" netns="" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.373 [INFO][5654] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.373 [INFO][5654] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.392 [INFO][5661] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.392 [INFO][5661] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.392 [INFO][5661] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.398 [WARNING][5661] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.398 [INFO][5661] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.399 [INFO][5661] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.402151 containerd[1724]: 2026-01-24 00:49:56.400 [INFO][5654] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.402765 containerd[1724]: time="2026-01-24T00:49:56.402193219Z" level=info msg="TearDown network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" successfully" Jan 24 00:49:56.402765 containerd[1724]: time="2026-01-24T00:49:56.402226319Z" level=info msg="StopPodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" returns successfully" Jan 24 00:49:56.402914 containerd[1724]: time="2026-01-24T00:49:56.402872127Z" level=info msg="RemovePodSandbox for \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" Jan 24 00:49:56.402977 containerd[1724]: time="2026-01-24T00:49:56.402921428Z" level=info msg="Forcibly stopping sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\"" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.434 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"7f13f097a3124fc5fe95e9f7d8570a825b744f69bb32c7ec7f9241a5d58377cf", Pod:"calico-apiserver-6956bf9c49-kjw59", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6288898b4d2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.435 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.435 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" iface="eth0" netns="" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.435 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.435 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.457 [INFO][5682] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.457 [INFO][5682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.457 [INFO][5682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.463 [WARNING][5682] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.463 [INFO][5682] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" HandleID="k8s-pod-network.0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--kjw59-eth0" Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.465 [INFO][5682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.467400 containerd[1724]: 2026-01-24 00:49:56.466 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b" Jan 24 00:49:56.468097 containerd[1724]: time="2026-01-24T00:49:56.467440010Z" level=info msg="TearDown network for sandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" successfully" Jan 24 00:49:56.476060 containerd[1724]: time="2026-01-24T00:49:56.476019114Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.476180 containerd[1724]: time="2026-01-24T00:49:56.476075215Z" level=info msg="RemovePodSandbox \"0a20bb33a36ef369959c06792a259229bb35528fbf49f328db9b3e8b3e961d3b\" returns successfully" Jan 24 00:49:56.476588 containerd[1724]: time="2026-01-24T00:49:56.476556820Z" level=info msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.506 [WARNING][5697] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.506 [INFO][5697] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.506 [INFO][5697] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" iface="eth0" netns="" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.506 [INFO][5697] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.506 [INFO][5697] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.528 [INFO][5704] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.528 [INFO][5704] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.528 [INFO][5704] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.534 [WARNING][5704] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.534 [INFO][5704] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.535 [INFO][5704] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.538703 containerd[1724]: 2026-01-24 00:49:56.537 [INFO][5697] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.539380 containerd[1724]: time="2026-01-24T00:49:56.538767875Z" level=info msg="TearDown network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" successfully" Jan 24 00:49:56.539380 containerd[1724]: time="2026-01-24T00:49:56.538797575Z" level=info msg="StopPodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" returns successfully" Jan 24 00:49:56.539467 containerd[1724]: time="2026-01-24T00:49:56.539441783Z" level=info msg="RemovePodSandbox for \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" Jan 24 00:49:56.539504 containerd[1724]: time="2026-01-24T00:49:56.539475083Z" level=info msg="Forcibly stopping sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\"" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.571 [WARNING][5718] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" WorkloadEndpoint="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.571 [INFO][5718] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.571 [INFO][5718] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" iface="eth0" netns="" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.571 [INFO][5718] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.571 [INFO][5718] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.592 [INFO][5725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.592 [INFO][5725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.592 [INFO][5725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.598 [WARNING][5725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.598 [INFO][5725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" HandleID="k8s-pod-network.5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Workload="ci--4081.3.6--n--f1b70866be-k8s-whisker--cd866fd95--t6ls8-eth0" Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.600 [INFO][5725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.604058 containerd[1724]: 2026-01-24 00:49:56.601 [INFO][5718] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c" Jan 24 00:49:56.604058 containerd[1724]: time="2026-01-24T00:49:56.602547848Z" level=info msg="TearDown network for sandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" successfully" Jan 24 00:49:56.611514 containerd[1724]: time="2026-01-24T00:49:56.611355455Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.611514 containerd[1724]: time="2026-01-24T00:49:56.611416856Z" level=info msg="RemovePodSandbox \"5b481fca227eecd5a028f025f40fe15619d36c27bc2c0b84afb5c211b577d64c\" returns successfully" Jan 24 00:49:56.611999 containerd[1724]: time="2026-01-24T00:49:56.611952162Z" level=info msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.643 [WARNING][5740] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a99a2457-cb1f-40ec-b343-9e0ed2df6091", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180", Pod:"goldmane-666569f655-665v9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e48413f1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.643 [INFO][5740] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.643 [INFO][5740] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" iface="eth0" netns="" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.643 [INFO][5740] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.643 [INFO][5740] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.663 [INFO][5747] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.663 [INFO][5747] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.663 [INFO][5747] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.669 [WARNING][5747] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.669 [INFO][5747] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.670 [INFO][5747] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.673064 containerd[1724]: 2026-01-24 00:49:56.671 [INFO][5740] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.673672 containerd[1724]: time="2026-01-24T00:49:56.673107804Z" level=info msg="TearDown network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" successfully" Jan 24 00:49:56.673672 containerd[1724]: time="2026-01-24T00:49:56.673139204Z" level=info msg="StopPodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" returns successfully" Jan 24 00:49:56.673756 containerd[1724]: time="2026-01-24T00:49:56.673736511Z" level=info msg="RemovePodSandbox for \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" Jan 24 00:49:56.673795 containerd[1724]: time="2026-01-24T00:49:56.673785312Z" level=info msg="Forcibly stopping sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\"" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.710 [WARNING][5761] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"a99a2457-cb1f-40ec-b343-9e0ed2df6091", ResourceVersion:"1013", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"2a9895cf745563eba5fc05f7694e38f977babd436fae0a52040d2dcca2fa7180", Pod:"goldmane-666569f655-665v9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.95.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6e48413f1ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.710 [INFO][5761] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.710 [INFO][5761] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" iface="eth0" netns="" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.710 [INFO][5761] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.710 [INFO][5761] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.731 [INFO][5769] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.732 [INFO][5769] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.732 [INFO][5769] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.739 [WARNING][5769] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.739 [INFO][5769] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" HandleID="k8s-pod-network.86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Workload="ci--4081.3.6--n--f1b70866be-k8s-goldmane--666569f655--665v9-eth0" Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.740 [INFO][5769] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.742976 containerd[1724]: 2026-01-24 00:49:56.741 [INFO][5761] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5" Jan 24 00:49:56.743820 containerd[1724]: time="2026-01-24T00:49:56.743121752Z" level=info msg="TearDown network for sandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" successfully" Jan 24 00:49:56.752048 containerd[1724]: time="2026-01-24T00:49:56.752007160Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.752172 containerd[1724]: time="2026-01-24T00:49:56.752071461Z" level=info msg="RemovePodSandbox \"86701a37ba34ca5cbd15389e13eb76c71f48f800706e31f4b7aebf7b3c0b1af5\" returns successfully" Jan 24 00:49:56.752678 containerd[1724]: time="2026-01-24T00:49:56.752647068Z" level=info msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.788 [WARNING][5784] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5ffb552-fd5c-4233-a4d0-bee61c2df92f", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea", Pod:"calico-apiserver-6956bf9c49-lv44h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec268dd8931", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.788 [INFO][5784] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.788 [INFO][5784] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" iface="eth0" netns="" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.788 [INFO][5784] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.788 [INFO][5784] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.812 [INFO][5791] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.812 [INFO][5791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.812 [INFO][5791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.818 [WARNING][5791] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.818 [INFO][5791] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.826 [INFO][5791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.829390 containerd[1724]: 2026-01-24 00:49:56.828 [INFO][5784] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.830017 containerd[1724]: time="2026-01-24T00:49:56.829424499Z" level=info msg="TearDown network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" successfully" Jan 24 00:49:56.830017 containerd[1724]: time="2026-01-24T00:49:56.829455499Z" level=info msg="StopPodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" returns successfully" Jan 24 00:49:56.830099 containerd[1724]: time="2026-01-24T00:49:56.830027406Z" level=info msg="RemovePodSandbox for \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" Jan 24 00:49:56.830099 containerd[1724]: time="2026-01-24T00:49:56.830060207Z" level=info msg="Forcibly stopping sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\"" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.862 [WARNING][5805] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0", GenerateName:"calico-apiserver-6956bf9c49-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5ffb552-fd5c-4233-a4d0-bee61c2df92f", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6956bf9c49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"61d30c1ba8aef5fa5e41ea14f003a17bd3419cd0d131c6af549f2f3fb541b2ea", Pod:"calico-apiserver-6956bf9c49-lv44h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.95.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliec268dd8931", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.862 [INFO][5805] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.862 [INFO][5805] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" iface="eth0" netns="" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.862 [INFO][5805] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.862 [INFO][5805] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.883 [INFO][5812] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.883 [INFO][5812] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.883 [INFO][5812] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.888 [WARNING][5812] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.888 [INFO][5812] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" HandleID="k8s-pod-network.aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Workload="ci--4081.3.6--n--f1b70866be-k8s-calico--apiserver--6956bf9c49--lv44h-eth0" Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.890 [INFO][5812] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.892862 containerd[1724]: 2026-01-24 00:49:56.891 [INFO][5805] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d" Jan 24 00:49:56.893587 containerd[1724]: time="2026-01-24T00:49:56.892905369Z" level=info msg="TearDown network for sandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" successfully" Jan 24 00:49:56.902549 containerd[1724]: time="2026-01-24T00:49:56.902507985Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:56.902682 containerd[1724]: time="2026-01-24T00:49:56.902572286Z" level=info msg="RemovePodSandbox \"aee815283671a8c07ab4dba752bd6d853a6133fe4c90a2e4e82d9e431b66c00d\" returns successfully" Jan 24 00:49:56.903138 containerd[1724]: time="2026-01-24T00:49:56.903108292Z" level=info msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.933 [WARNING][5826] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83de2bc6-05d2-4a24-a80e-44ff528a5b2e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4", Pod:"coredns-674b8bbfcf-8wvv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e8c05b64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.933 [INFO][5826] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.933 [INFO][5826] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" iface="eth0" netns="" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.933 [INFO][5826] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.933 [INFO][5826] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.952 [INFO][5833] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.953 [INFO][5833] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.953 [INFO][5833] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.959 [WARNING][5833] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.959 [INFO][5833] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.960 [INFO][5833] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:56.962842 containerd[1724]: 2026-01-24 00:49:56.961 [INFO][5826] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:56.963581 containerd[1724]: time="2026-01-24T00:49:56.962892617Z" level=info msg="TearDown network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" successfully" Jan 24 00:49:56.963581 containerd[1724]: time="2026-01-24T00:49:56.962921717Z" level=info msg="StopPodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" returns successfully" Jan 24 00:49:56.963812 containerd[1724]: time="2026-01-24T00:49:56.963784528Z" level=info msg="RemovePodSandbox for \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" Jan 24 00:49:56.963891 containerd[1724]: time="2026-01-24T00:49:56.963821128Z" level=info msg="Forcibly stopping sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\"" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:56.996 [WARNING][5848] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"83de2bc6-05d2-4a24-a80e-44ff528a5b2e", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"9b540af265ddddeedad7cf9ab532e0bf69e4301656184155342a89b28008dba4", Pod:"coredns-674b8bbfcf-8wvv8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69e8c05b64b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:56.997 [INFO][5848] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:56.997 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" iface="eth0" netns="" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:56.997 [INFO][5848] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:56.997 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.019 [INFO][5855] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.020 [INFO][5855] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.020 [INFO][5855] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.025 [WARNING][5855] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.025 [INFO][5855] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" HandleID="k8s-pod-network.086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--8wvv8-eth0" Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.027 [INFO][5855] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:57.029866 containerd[1724]: 2026-01-24 00:49:57.028 [INFO][5848] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9" Jan 24 00:49:57.030537 containerd[1724]: time="2026-01-24T00:49:57.029923530Z" level=info msg="TearDown network for sandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" successfully" Jan 24 00:49:57.038112 containerd[1724]: time="2026-01-24T00:49:57.038070129Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:57.038256 containerd[1724]: time="2026-01-24T00:49:57.038136929Z" level=info msg="RemovePodSandbox \"086b27669a1e9fb45e0d618e830b4c1a750c3c76e913c33feea2656b62c84ec9\" returns successfully" Jan 24 00:49:57.038728 containerd[1724]: time="2026-01-24T00:49:57.038699536Z" level=info msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.070 [WARNING][5869] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"11df6f13-f663-4026-809b-f40550f91486", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe", Pod:"coredns-674b8bbfcf-rxr9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee97ac7098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.070 [INFO][5869] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.070 [INFO][5869] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" iface="eth0" netns="" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.070 [INFO][5869] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.070 [INFO][5869] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.089 [INFO][5876] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.089 [INFO][5876] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.089 [INFO][5876] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.095 [WARNING][5876] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.095 [INFO][5876] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.096 [INFO][5876] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:57.098899 containerd[1724]: 2026-01-24 00:49:57.097 [INFO][5869] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.099521 containerd[1724]: time="2026-01-24T00:49:57.098938367Z" level=info msg="TearDown network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" successfully" Jan 24 00:49:57.099521 containerd[1724]: time="2026-01-24T00:49:57.098968967Z" level=info msg="StopPodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" returns successfully" Jan 24 00:49:57.099764 containerd[1724]: time="2026-01-24T00:49:57.099733176Z" level=info msg="RemovePodSandbox for \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" Jan 24 00:49:57.099853 containerd[1724]: time="2026-01-24T00:49:57.099766277Z" level=info msg="Forcibly stopping sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\"" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.135 [WARNING][5890] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"11df6f13-f663-4026-809b-f40550f91486", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-f1b70866be", ContainerID:"15967aee6f58bb996cd064fa37a11803b6c8efa1cad4bd44c1155e946e47cbbe", Pod:"coredns-674b8bbfcf-rxr9t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.95.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calidee97ac7098", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.136 [INFO][5890] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.136 [INFO][5890] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" iface="eth0" netns="" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.136 [INFO][5890] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.136 [INFO][5890] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.155 [INFO][5897] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.155 [INFO][5897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.155 [INFO][5897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.160 [WARNING][5897] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.160 [INFO][5897] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" HandleID="k8s-pod-network.e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Workload="ci--4081.3.6--n--f1b70866be-k8s-coredns--674b8bbfcf--rxr9t-eth0" Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.162 [INFO][5897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:57.166730 containerd[1724]: 2026-01-24 00:49:57.163 [INFO][5890] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be" Jan 24 00:49:57.166730 containerd[1724]: time="2026-01-24T00:49:57.164710464Z" level=info msg="TearDown network for sandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" successfully" Jan 24 00:49:57.173850 containerd[1724]: time="2026-01-24T00:49:57.173817075Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:57.173955 containerd[1724]: time="2026-01-24T00:49:57.173881675Z" level=info msg="RemovePodSandbox \"e02767b177197686273ed652d9a232a591c4fd34bd52dd56b504be9ba5f560be\" returns successfully" Jan 24 00:49:59.067363 containerd[1724]: time="2026-01-24T00:49:59.066572229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:49:59.333769 containerd[1724]: time="2026-01-24T00:49:59.333606869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:59.336360 containerd[1724]: time="2026-01-24T00:49:59.336304201Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:49:59.336472 containerd[1724]: time="2026-01-24T00:49:59.336401502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:59.336617 kubelet[3286]: E0124 00:49:59.336567 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:59.337036 kubelet[3286]: E0124 00:49:59.336620 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:59.337036 kubelet[3286]: E0124 00:49:59.336808 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:59.338608 kubelet[3286]: E0124 00:49:59.338498 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:50:01.066748 containerd[1724]: time="2026-01-24T00:50:01.066411289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:50:01.499893 containerd[1724]: time="2026-01-24T00:50:01.499841646Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:01.503005 containerd[1724]: time="2026-01-24T00:50:01.502949084Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:50:01.503195 containerd[1724]: time="2026-01-24T00:50:01.502961884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:50:01.503285 kubelet[3286]: E0124 00:50:01.503235 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:01.503667 kubelet[3286]: E0124 00:50:01.503298 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:01.503667 kubelet[3286]: E0124 00:50:01.503571 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw48x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:01.504470 containerd[1724]: time="2026-01-24T00:50:01.504444402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:01.504886 kubelet[3286]: E0124 00:50:01.504819 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:50:01.767431 containerd[1724]: time="2026-01-24T00:50:01.767098988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:01.773704 containerd[1724]: time="2026-01-24T00:50:01.773589767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:01.773704 containerd[1724]: time="2026-01-24T00:50:01.773634668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:01.774160 kubelet[3286]: E0124 00:50:01.773808 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:01.774160 kubelet[3286]: E0124 00:50:01.774061 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:01.775412 
kubelet[3286]: E0124 00:50:01.775364 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg2lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:01.777621 kubelet[3286]: E0124 00:50:01.777556 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:50:02.070234 containerd[1724]: time="2026-01-24T00:50:02.066343918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:50:02.489892 containerd[1724]: time="2026-01-24T00:50:02.489845956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:02.493423 containerd[1724]: time="2026-01-24T00:50:02.493381299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:50:02.493541 containerd[1724]: time="2026-01-24T00:50:02.493475700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:50:02.493741 kubelet[3286]: E0124 00:50:02.493701 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:50:02.493846 kubelet[3286]: E0124 00:50:02.493753 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:50:02.493948 kubelet[3286]: E0124 00:50:02.493910 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:02.496707 containerd[1724]: time="2026-01-24T00:50:02.496521037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:50:02.762264 containerd[1724]: time="2026-01-24T00:50:02.762094658Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:02.770432 containerd[1724]: time="2026-01-24T00:50:02.770382459Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:50:02.770568 containerd[1724]: time="2026-01-24T00:50:02.770477460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:50:02.770657 kubelet[3286]: E0124 00:50:02.770617 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:50:02.771039 kubelet[3286]: E0124 00:50:02.770667 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:50:02.771039 kubelet[3286]: E0124 00:50:02.770829 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:02.772861 kubelet[3286]: E0124 00:50:02.772783 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:50:05.067294 kubelet[3286]: E0124 00:50:05.067029 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:50:08.069135 kubelet[3286]: E0124 00:50:08.068202 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:50:10.121333 waagent[1940]: 2026-01-24T00:50:10.121260Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2]
Jan 24 00:50:10.128110 waagent[1940]: 2026-01-24T00:50:10.128048Z INFO ExtHandler
Jan 24 00:50:10.128257 waagent[1940]: 2026-01-24T00:50:10.128197Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c76a1644-d993-4706-a580-2c0f12fc429a eTag: 16817091342554854822 source: Fabric]
Jan 24 00:50:10.128602 waagent[1940]: 2026-01-24T00:50:10.128546Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 24 00:50:10.129208 waagent[1940]: 2026-01-24T00:50:10.129134Z INFO ExtHandler
Jan 24 00:50:10.129300 waagent[1940]: 2026-01-24T00:50:10.129251Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2]
Jan 24 00:50:10.216392 waagent[1940]: 2026-01-24T00:50:10.216322Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 24 00:50:10.345383 waagent[1940]: 2026-01-24T00:50:10.345301Z INFO ExtHandler Downloaded certificate {'thumbprint': '34FCF90823E0FBBD8A11F8367A9AA4BF493899E0', 'hasPrivateKey': True}
Jan 24 00:50:10.345890 waagent[1940]: 2026-01-24T00:50:10.345830Z INFO ExtHandler Fetch goal state completed
Jan 24 00:50:10.346298 waagent[1940]: 2026-01-24T00:50:10.346252Z INFO ExtHandler ExtHandler
Jan 24 00:50:10.346384 waagent[1940]: 2026-01-24T00:50:10.346343Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: c5593591-4996-4ab3-8241-6c2e053c75b6 correlation 644954bc-62c1-4696-ba25-ea6568dd9b03 created: 2026-01-24T00:50:01.013885Z]
Jan 24 00:50:10.346717 waagent[1940]: 2026-01-24T00:50:10.346671Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Jan 24 00:50:10.347203 waagent[1940]: 2026-01-24T00:50:10.347140Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 0 ms]
Jan 24 00:50:12.070640 kubelet[3286]: E0124 00:50:12.070310 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:50:12.073993 kubelet[3286]: E0124 00:50:12.073326 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:50:14.071763 kubelet[3286]: E0124 00:50:14.071683 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:50:15.069244 kubelet[3286]: E0124 00:50:15.069196 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:50:16.070746 containerd[1724]: time="2026-01-24T00:50:16.070302696Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:50:16.340067 containerd[1724]: time="2026-01-24T00:50:16.339923974Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:16.344941 containerd[1724]: time="2026-01-24T00:50:16.344816433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:50:16.345258 containerd[1724]: time="2026-01-24T00:50:16.344996436Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:50:16.345569 kubelet[3286]: E0124 00:50:16.345533 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:50:16.346374 kubelet[3286]: E0124 00:50:16.345582 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:50:16.346374 kubelet[3286]: E0124 00:50:16.345720 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7e553b4077f468cbd98dfe225bad7bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:16.348779 containerd[1724]: time="2026-01-24T00:50:16.348749881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:50:16.614192 containerd[1724]: time="2026-01-24T00:50:16.614099407Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:16.619804 containerd[1724]: time="2026-01-24T00:50:16.619754976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:50:16.621504 containerd[1724]: time="2026-01-24T00:50:16.619973078Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:50:16.621611 kubelet[3286]: E0124 00:50:16.620284 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:50:16.621611 kubelet[3286]: E0124 00:50:16.620342 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:50:16.621611 kubelet[3286]: E0124 00:50:16.620503 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:16.621983 kubelet[3286]: E0124 00:50:16.621941 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:50:23.067242 containerd[1724]: time="2026-01-24T00:50:23.066989792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:50:23.338389 containerd[1724]: time="2026-01-24T00:50:23.338064268Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:23.349777 containerd[1724]: time="2026-01-24T00:50:23.349613908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:50:23.349777 containerd[1724]: time="2026-01-24T00:50:23.349722409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:50:23.350873 kubelet[3286]: E0124 00:50:23.350077 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:50:23.350873 kubelet[3286]: E0124 00:50:23.350134 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:50:23.350873 kubelet[3286]: E0124 00:50:23.350405 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw48x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:23.351491 containerd[1724]: time="2026-01-24T00:50:23.351049025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:50:23.351678 kubelet[3286]: E0124 00:50:23.351616 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:50:23.626353 containerd[1724]: time="2026-01-24T00:50:23.626307652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:23.630272 containerd[1724]: time="2026-01-24T00:50:23.630224799Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:50:23.630453 containerd[1724]: time="2026-01-24T00:50:23.630339601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:50:23.630526 kubelet[3286]: E0124 00:50:23.630453 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:50:23.630526 kubelet[3286]: E0124 00:50:23.630514 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:50:23.630737 kubelet[3286]: E0124 00:50:23.630685 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:23.632253 kubelet[3286]: E0124 00:50:23.632216 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:50:26.070205 containerd[1724]: time="2026-01-24T00:50:26.069433781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:50:26.346351 containerd[1724]: time="2026-01-24T00:50:26.346244426Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:26.349192 containerd[1724]: time="2026-01-24T00:50:26.349062461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:50:26.349363 containerd[1724]: time="2026-01-24T00:50:26.349274463Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:50:26.349441 kubelet[3286]: E0124 00:50:26.349402 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:50:26.349789 kubelet[3286]: E0124 00:50:26.349451 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:50:26.349789 kubelet[3286]: E0124 00:50:26.349608 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:26.351128 kubelet[3286]: E0124 00:50:26.351088 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:50:28.072538 containerd[1724]: time="2026-01-24T00:50:28.072284888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 24 00:50:28.342972 containerd[1724]: time="2026-01-24T00:50:28.342658456Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:28.346761 containerd[1724]: time="2026-01-24T00:50:28.346598404Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 24 00:50:28.346761 containerd[1724]: time="2026-01-24T00:50:28.346703605Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 24 00:50:28.348603 kubelet[3286]: E0124 00:50:28.347052 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:50:28.348603 kubelet[3286]: E0124 00:50:28.347111 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 24 00:50:28.348603 kubelet[3286]: E0124 00:50:28.347279 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:28.350801 containerd[1724]: time="2026-01-24T00:50:28.350525051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 24 00:50:28.631159 containerd[1724]: time="2026-01-24T00:50:28.631101242Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:28.637681 containerd[1724]: time="2026-01-24T00:50:28.637479119Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 24 00:50:28.637681 containerd[1724]: time="2026-01-24T00:50:28.637582421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 24 00:50:28.638542 kubelet[3286]: E0124 00:50:28.637837 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:50:28.638542 kubelet[3286]: E0124 00:50:28.637888 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 24 00:50:28.638542 kubelet[3286]: E0124 00:50:28.638035 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:28.639622 kubelet[3286]: E0124 00:50:28.639540 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:50:29.069605 containerd[1724]: time="2026-01-24T00:50:29.069026935Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:50:29.334924 containerd[1724]: time="2026-01-24T00:50:29.334784547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:50:29.341248 containerd[1724]: time="2026-01-24T00:50:29.341201425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:50:29.341388 containerd[1724]: time="2026-01-24T00:50:29.341289126Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:50:29.341494 kubelet[3286]: E0124 00:50:29.341453 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:50:29.341569 kubelet[3286]: E0124 00:50:29.341510 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:50:29.341706 kubelet[3286]: E0124 00:50:29.341661 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg2lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:50:29.343238 kubelet[3286]: E0124 00:50:29.343185 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:50:30.072201 kubelet[3286]: E0124 00:50:30.072046 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:50:38.068420 kubelet[3286]: E0124 00:50:38.067960 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:50:39.068624 kubelet[3286]: E0124 00:50:39.068348 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:50:39.069364 kubelet[3286]: E0124 00:50:39.068961 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:50:41.068845 kubelet[3286]: E0124 00:50:41.068739 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:50:42.070573 kubelet[3286]: E0124 00:50:42.070418 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:50:44.068236 kubelet[3286]: E0124 00:50:44.068189 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:50:52.071428 kubelet[3286]: E0124 00:50:52.071373 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:50:52.071963 kubelet[3286]: E0124 00:50:52.071823 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:50:52.071963 kubelet[3286]: E0124 00:50:52.071903 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:50:55.067032 kubelet[3286]: E0124 00:50:55.066604 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:50:55.069706 kubelet[3286]: E0124 00:50:55.069559 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:50:57.068194 kubelet[3286]: E0124 00:50:57.067127 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:51:03.066998 kubelet[3286]: E0124 00:51:03.066609 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:51:03.066998 kubelet[3286]: E0124 00:51:03.066673 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:51:06.069858 containerd[1724]: time="2026-01-24T00:51:06.069814334Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 24 00:51:06.345165 containerd[1724]: time="2026-01-24T00:51:06.345001250Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:51:06.351314 containerd[1724]: time="2026-01-24T00:51:06.351252826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 24 00:51:06.351447 containerd[1724]: time="2026-01-24T00:51:06.351264926Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:51:06.351565 kubelet[3286]: E0124 00:51:06.351524 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:51:06.352093 kubelet[3286]: E0124 00:51:06.351577 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 24 00:51:06.352093 kubelet[3286]: E0124 00:51:06.351760 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw48x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cbf78d979-7mqd4_calico-system(f1ccdfdb-e7f0-4061-88d9-d4f165d7633c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:51:06.353135 kubelet[3286]: E0124 00:51:06.353099 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\""
pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:51:07.066871 containerd[1724]: time="2026-01-24T00:51:07.066719947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:51:07.332564 containerd[1724]: time="2026-01-24T00:51:07.332290047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:07.335786 containerd[1724]: time="2026-01-24T00:51:07.335520986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:51:07.335786 containerd[1724]: time="2026-01-24T00:51:07.335586887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:51:07.336047 kubelet[3286]: E0124 00:51:07.335805 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:07.336047 kubelet[3286]: E0124 00:51:07.335861 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:07.336561 kubelet[3286]: E0124 00:51:07.336486 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a7e553b4077f468cbd98dfe225bad7bf,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:07.339430 containerd[1724]: time="2026-01-24T00:51:07.339400733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:51:07.614063 containerd[1724]: time="2026-01-24T00:51:07.613952941Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:07.617342 containerd[1724]: time="2026-01-24T00:51:07.617219380Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:51:07.617639 containerd[1724]: time="2026-01-24T00:51:07.617262081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:51:07.618075 kubelet[3286]: E0124 00:51:07.617742 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:07.618075 kubelet[3286]: E0124 00:51:07.617796 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:07.618075 kubelet[3286]: E0124 00:51:07.617949 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9m6mz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-66577d4d7-zhfd5_calico-system(d86b5148-8637-436b-954a-278a7b8ba7a4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:07.620018 kubelet[3286]: E0124 00:51:07.619944 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:51:09.066228 kubelet[3286]: E0124 00:51:09.066176 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:51:12.068949 containerd[1724]: time="2026-01-24T00:51:12.068895749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:51:12.340427 containerd[1724]: time="2026-01-24T00:51:12.340259853Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:12.342810 containerd[1724]: time="2026-01-24T00:51:12.342750383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:51:12.342949 containerd[1724]: time="2026-01-24T00:51:12.342749183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:51:12.343091 kubelet[3286]: E0124 00:51:12.343051 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:12.343595 kubelet[3286]: E0124 00:51:12.343102 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:12.343595 kubelet[3286]: E0124 00:51:12.343277 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:12.345787 containerd[1724]: time="2026-01-24T00:51:12.345647818Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:51:12.532443 systemd[1]: Started sshd@7-10.200.4.29:22-10.200.16.10:60536.service - OpenSSH per-connection server daemon (10.200.16.10:60536). 
Jan 24 00:51:12.608251 containerd[1724]: time="2026-01-24T00:51:12.608008213Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:12.615511 containerd[1724]: time="2026-01-24T00:51:12.615276501Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:51:12.615511 containerd[1724]: time="2026-01-24T00:51:12.615272701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:51:12.616107 kubelet[3286]: E0124 00:51:12.616022 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:12.616388 kubelet[3286]: E0124 00:51:12.616184 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:12.616741 kubelet[3286]: E0124 00:51:12.616682 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4kj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-hw7t4_calico-system(11ebe218-698b-4f81-b5c6-5227731a6439): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:12.620268 kubelet[3286]: E0124 00:51:12.620223 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:51:13.137677 sshd[6053]: Accepted publickey for core from 10.200.16.10 port 60536 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:13.140344 sshd[6053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:13.145870 systemd-logind[1708]: New session 10 of user core. Jan 24 00:51:13.152333 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 24 00:51:13.690393 sshd[6053]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:13.696617 systemd[1]: sshd@7-10.200.4.29:22-10.200.16.10:60536.service: Deactivated successfully. Jan 24 00:51:13.698961 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:51:13.702734 systemd-logind[1708]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:51:13.705237 systemd-logind[1708]: Removed session 10. Jan 24 00:51:14.070106 containerd[1724]: time="2026-01-24T00:51:14.069469506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:14.345106 containerd[1724]: time="2026-01-24T00:51:14.344690956Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:14.347793 containerd[1724]: time="2026-01-24T00:51:14.347740393Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:14.347910 containerd[1724]: time="2026-01-24T00:51:14.347842595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:14.348117 kubelet[3286]: E0124 00:51:14.348044 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:14.348534 kubelet[3286]: E0124 00:51:14.348168 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:14.348534 kubelet[3286]: E0124 00:51:14.348487 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzdgt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-kjw59_calico-apiserver(10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:14.350196 kubelet[3286]: E0124 00:51:14.350135 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:51:17.068016 kubelet[3286]: E0124 00:51:17.067942 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:51:18.071798 containerd[1724]: time="2026-01-24T00:51:18.071501729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:51:18.352526 containerd[1724]: time="2026-01-24T00:51:18.352281737Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:18.355124 containerd[1724]: time="2026-01-24T00:51:18.355066767Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:51:18.355266 containerd[1724]: time="2026-01-24T00:51:18.355102768Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:18.356721 kubelet[3286]: E0124 00:51:18.355446 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:18.356721 kubelet[3286]: E0124 00:51:18.355505 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:18.356721 kubelet[3286]: E0124 00:51:18.355660 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z72nt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-665v9_calico-system(a99a2457-cb1f-40ec-b343-9e0ed2df6091): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:18.357441 kubelet[3286]: E0124 00:51:18.357375 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:51:18.796374 systemd[1]: Started sshd@8-10.200.4.29:22-10.200.16.10:60542.service - OpenSSH per-connection server daemon (10.200.16.10:60542). Jan 24 00:51:19.419999 sshd[6078]: Accepted publickey for core from 10.200.16.10 port 60542 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:19.421097 sshd[6078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:19.430778 systemd-logind[1708]: New session 11 of user core. Jan 24 00:51:19.434919 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:51:20.055880 sshd[6078]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:20.060189 systemd-logind[1708]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:51:20.062843 systemd[1]: sshd@8-10.200.4.29:22-10.200.16.10:60542.service: Deactivated successfully. Jan 24 00:51:20.070683 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:51:20.075674 systemd-logind[1708]: Removed session 11. Jan 24 00:51:20.077516 kubelet[3286]: E0124 00:51:20.076819 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:51:22.069200 containerd[1724]: time="2026-01-24T00:51:22.069141022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:22.341712 containerd[1724]: time="2026-01-24T00:51:22.341561274Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:22.344254 containerd[1724]: time="2026-01-24T00:51:22.344057102Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:22.344254 containerd[1724]: 
time="2026-01-24T00:51:22.344109602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:22.344465 kubelet[3286]: E0124 00:51:22.344304 3286 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:22.344465 kubelet[3286]: E0124 00:51:22.344361 3286 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:22.345131 kubelet[3286]: E0124 00:51:22.344556 3286 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jg2lz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6956bf9c49-lv44h_calico-apiserver(c5ffb552-fd5c-4233-a4d0-bee61c2df92f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:22.346216 kubelet[3286]: E0124 00:51:22.346131 3286 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:51:25.161475 systemd[1]: Started sshd@9-10.200.4.29:22-10.200.16.10:37030.service - OpenSSH per-connection server daemon (10.200.16.10:37030). Jan 24 00:51:25.764304 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 37030 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:25.766218 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:25.774592 systemd-logind[1708]: New session 12 of user core. Jan 24 00:51:25.782311 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:51:26.071598 kubelet[3286]: E0124 00:51:26.070734 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:51:26.288566 sshd[6092]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:26.294982 systemd[1]: sshd@9-10.200.4.29:22-10.200.16.10:37030.service: Deactivated successfully. Jan 24 00:51:26.298590 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:51:26.302126 systemd-logind[1708]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:51:26.305489 systemd-logind[1708]: Removed session 12. Jan 24 00:51:26.405255 systemd[1]: Started sshd@10-10.200.4.29:22-10.200.16.10:37042.service - OpenSSH per-connection server daemon (10.200.16.10:37042). Jan 24 00:51:27.007000 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 37042 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:27.007880 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:27.014053 systemd-logind[1708]: New session 13 of user core. Jan 24 00:51:27.016592 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 24 00:51:27.069469 kubelet[3286]: E0124 00:51:27.069394 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:51:27.610647 sshd[6106]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:27.616817 systemd-logind[1708]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:51:27.619707 systemd[1]: sshd@10-10.200.4.29:22-10.200.16.10:37042.service: Deactivated successfully. Jan 24 00:51:27.623761 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:51:27.625054 systemd-logind[1708]: Removed session 13. Jan 24 00:51:27.724958 systemd[1]: Started sshd@11-10.200.4.29:22-10.200.16.10:37050.service - OpenSSH per-connection server daemon (10.200.16.10:37050). Jan 24 00:51:28.333186 sshd[6117]: Accepted publickey for core from 10.200.16.10 port 37050 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:28.334591 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:28.339843 systemd-logind[1708]: New session 14 of user core. Jan 24 00:51:28.345333 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:51:28.833906 sshd[6117]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:28.839380 systemd[1]: sshd@11-10.200.4.29:22-10.200.16.10:37050.service: Deactivated successfully. Jan 24 00:51:28.844229 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:51:28.845452 systemd-logind[1708]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:51:28.846667 systemd-logind[1708]: Removed session 14. 
Jan 24 00:51:29.066832 kubelet[3286]: E0124 00:51:29.066775 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:51:31.067726 kubelet[3286]: E0124 00:51:31.067674 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:51:32.070116 kubelet[3286]: E0124 00:51:32.069995 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:51:33.949242 systemd[1]: Started sshd@12-10.200.4.29:22-10.200.16.10:54558.service - OpenSSH per-connection server daemon (10.200.16.10:54558). Jan 24 00:51:34.561735 sshd[6132]: Accepted publickey for core from 10.200.16.10 port 54558 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:34.564260 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:34.570131 systemd-logind[1708]: New session 15 of user core. Jan 24 00:51:34.580270 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:51:35.052092 sshd[6132]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:35.056035 systemd[1]: sshd@12-10.200.4.29:22-10.200.16.10:54558.service: Deactivated successfully. Jan 24 00:51:35.058134 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:51:35.058937 systemd-logind[1708]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:51:35.059919 systemd-logind[1708]: Removed session 15. 
Jan 24 00:51:37.066986 kubelet[3286]: E0124 00:51:37.066932 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f" Jan 24 00:51:38.067547 kubelet[3286]: E0124 00:51:38.067467 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff" Jan 24 00:51:40.168449 systemd[1]: Started sshd@13-10.200.4.29:22-10.200.16.10:58104.service - OpenSSH per-connection server daemon (10.200.16.10:58104). Jan 24 00:51:40.804121 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 58104 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:40.807190 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:40.813475 systemd-logind[1708]: New session 16 of user core. Jan 24 00:51:40.820353 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:51:41.068304 kubelet[3286]: E0124 00:51:41.066883 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c" Jan 24 00:51:41.296225 sshd[6171]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:41.299377 systemd[1]: sshd@13-10.200.4.29:22-10.200.16.10:58104.service: Deactivated successfully. Jan 24 00:51:41.301700 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:51:41.303498 systemd-logind[1708]: Session 16 logged out. Waiting for processes to exit. Jan 24 00:51:41.305085 systemd-logind[1708]: Removed session 16. 
Jan 24 00:51:42.073005 kubelet[3286]: E0124 00:51:42.072948 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439" Jan 24 00:51:43.068488 kubelet[3286]: E0124 00:51:43.068435 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4" Jan 24 00:51:46.070118 kubelet[3286]: E0124 00:51:46.070024 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091" Jan 24 00:51:46.415435 systemd[1]: Started sshd@14-10.200.4.29:22-10.200.16.10:58114.service - OpenSSH per-connection server daemon (10.200.16.10:58114). Jan 24 00:51:47.025748 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 58114 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:47.029199 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:47.039356 systemd-logind[1708]: New session 17 of user core. Jan 24 00:51:47.043345 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 24 00:51:47.542080 sshd[6184]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:47.547296 systemd-logind[1708]: Session 17 logged out. Waiting for processes to exit. 
Jan 24 00:51:47.548532 systemd[1]: sshd@14-10.200.4.29:22-10.200.16.10:58114.service: Deactivated successfully.
Jan 24 00:51:47.550930 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:51:47.553725 systemd-logind[1708]: Removed session 17.
Jan 24 00:51:47.649937 systemd[1]: Started sshd@15-10.200.4.29:22-10.200.16.10:58128.service - OpenSSH per-connection server daemon (10.200.16.10:58128).
Jan 24 00:51:48.071519 kubelet[3286]: E0124 00:51:48.071409 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:51:48.271172 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 58128 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:48.272238 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:48.279481 systemd-logind[1708]: New session 18 of user core.
Jan 24 00:51:48.286310 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:51:48.834078 sshd[6197]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:48.839683 systemd-logind[1708]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:51:48.840628 systemd[1]: sshd@15-10.200.4.29:22-10.200.16.10:58128.service: Deactivated successfully.
Jan 24 00:51:48.844137 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:51:48.847670 systemd-logind[1708]: Removed session 18.
Jan 24 00:51:48.949448 systemd[1]: Started sshd@16-10.200.4.29:22-10.200.16.10:58140.service - OpenSSH per-connection server daemon (10.200.16.10:58140).
Jan 24 00:51:49.548577 sshd[6208]: Accepted publickey for core from 10.200.16.10 port 58140 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:49.550058 sshd[6208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:49.554622 systemd-logind[1708]: New session 19 of user core.
Jan 24 00:51:49.557322 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:51:50.758465 sshd[6208]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:50.763594 systemd[1]: sshd@16-10.200.4.29:22-10.200.16.10:58140.service: Deactivated successfully.
Jan 24 00:51:50.767206 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:51:50.772081 systemd-logind[1708]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:51:50.773979 systemd-logind[1708]: Removed session 19.
Jan 24 00:51:50.876008 systemd[1]: Started sshd@17-10.200.4.29:22-10.200.16.10:39726.service - OpenSSH per-connection server daemon (10.200.16.10:39726).
Jan 24 00:51:51.495075 sshd[6226]: Accepted publickey for core from 10.200.16.10 port 39726 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:51.497483 sshd[6226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:51.504640 systemd-logind[1708]: New session 20 of user core.
Jan 24 00:51:51.513341 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:51:52.226080 sshd[6226]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:52.229841 systemd[1]: sshd@17-10.200.4.29:22-10.200.16.10:39726.service: Deactivated successfully.
Jan 24 00:51:52.232343 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:51:52.234368 systemd-logind[1708]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:51:52.235731 systemd-logind[1708]: Removed session 20.
Jan 24 00:51:52.340224 systemd[1]: Started sshd@18-10.200.4.29:22-10.200.16.10:39730.service - OpenSSH per-connection server daemon (10.200.16.10:39730).
Jan 24 00:51:52.941011 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 39730 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:52.943789 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:52.951384 systemd-logind[1708]: New session 21 of user core.
Jan 24 00:51:52.957350 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:51:53.066489 kubelet[3286]: E0124 00:51:53.066263 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:51:53.067981 kubelet[3286]: E0124 00:51:53.066950 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:51:53.458404 sshd[6237]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:53.464626 systemd[1]: sshd@18-10.200.4.29:22-10.200.16.10:39730.service: Deactivated successfully.
Jan 24 00:51:53.469667 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:51:53.471277 systemd-logind[1708]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:51:53.473588 systemd-logind[1708]: Removed session 21.
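Each SSH connection in this log follows the same systemd lifecycle: socket activation starts a per-connection sshd@<counter>-<local>:22-<peer>:<port>.service unit, pam_unix opens the session, systemd-logind registers a numbered session backed by a session-<n>.scope unit, and on disconnect both units deactivate before the session is removed. A short sketch for inspecting that state on a live host, assuming systemd-logind is running as it is here:

  loginctl list-sessions
  loginctl session-status 21

list-sessions enumerates the numbered sessions for user core; session-status shows the backing scope, leader PID, and TTY for one of them.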
Jan 24 00:51:54.067371 kubelet[3286]: E0124 00:51:54.067252 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:51:55.067046 kubelet[3286]: E0124 00:51:55.066997 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:51:58.070500 kubelet[3286]: E0124 00:51:58.070443 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:51:58.574290 systemd[1]: Started sshd@19-10.200.4.29:22-10.200.16.10:39740.service - OpenSSH per-connection server daemon (10.200.16.10:39740).
Jan 24 00:51:59.181638 sshd[6255]: Accepted publickey for core from 10.200.16.10 port 39740 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:59.183913 sshd[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:59.191821 systemd-logind[1708]: New session 22 of user core.
Jan 24 00:51:59.196321 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:51:59.708686 sshd[6255]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:59.712976 systemd-logind[1708]: Session 22 logged out. Waiting for processes to exit.
Jan 24 00:51:59.715687 systemd[1]: sshd@19-10.200.4.29:22-10.200.16.10:39740.service: Deactivated successfully.
Jan 24 00:51:59.720050 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 00:51:59.722260 systemd-logind[1708]: Removed session 22.
Jan 24 00:52:00.068542 kubelet[3286]: E0124 00:52:00.068324 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:52:04.067586 kubelet[3286]: E0124 00:52:04.067537 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:52:04.823254 systemd[1]: Started sshd@20-10.200.4.29:22-10.200.16.10:48400.service - OpenSSH per-connection server daemon (10.200.16.10:48400).
Jan 24 00:52:05.427831 sshd[6271]: Accepted publickey for core from 10.200.16.10 port 48400 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:05.429812 sshd[6271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:05.442542 systemd-logind[1708]: New session 23 of user core.
Jan 24 00:52:05.447472 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 00:52:05.942691 sshd[6271]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:05.946938 systemd-logind[1708]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:52:05.949193 systemd[1]: sshd@20-10.200.4.29:22-10.200.16.10:48400.service: Deactivated successfully.
Jan 24 00:52:05.954861 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:52:05.959208 systemd-logind[1708]: Removed session 23.
Jan 24 00:52:08.069954 kubelet[3286]: E0124 00:52:08.069686 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:52:08.071867 kubelet[3286]: E0124 00:52:08.071002 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:52:09.067976 kubelet[3286]: E0124 00:52:09.067900 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:52:10.070416 kubelet[3286]: E0124 00:52:10.070367 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:52:11.057254 systemd[1]: Started sshd@21-10.200.4.29:22-10.200.16.10:56928.service - OpenSSH per-connection server daemon (10.200.16.10:56928).
Jan 24 00:52:11.671314 sshd[6308]: Accepted publickey for core from 10.200.16.10 port 56928 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:11.673173 sshd[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:11.679553 systemd-logind[1708]: New session 24 of user core.
Jan 24 00:52:11.684321 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:52:12.218071 sshd[6308]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:12.222664 systemd-logind[1708]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:52:12.224806 systemd[1]: sshd@21-10.200.4.29:22-10.200.16.10:56928.service: Deactivated successfully.
Jan 24 00:52:12.229853 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:52:12.235827 systemd-logind[1708]: Removed session 24.
Jan 24 00:52:13.066878 kubelet[3286]: E0124 00:52:13.066811 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:52:17.330844 systemd[1]: Started sshd@22-10.200.4.29:22-10.200.16.10:56938.service - OpenSSH per-connection server daemon (10.200.16.10:56938).
Jan 24 00:52:17.942174 sshd[6322]: Accepted publickey for core from 10.200.16.10 port 56938 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:17.940600 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:17.952357 systemd-logind[1708]: New session 25 of user core.
Jan 24 00:52:17.960352 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 00:52:18.443435 sshd[6322]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:18.448363 systemd[1]: sshd@22-10.200.4.29:22-10.200.16.10:56938.service: Deactivated successfully.
Jan 24 00:52:18.452041 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 00:52:18.453062 systemd-logind[1708]: Session 25 logged out. Waiting for processes to exit.
Jan 24 00:52:18.456024 systemd-logind[1708]: Removed session 25.
Jan 24 00:52:19.067177 kubelet[3286]: E0124 00:52:19.067088 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cbf78d979-7mqd4" podUID="f1ccdfdb-e7f0-4061-88d9-d4f165d7633c"
Jan 24 00:52:19.068646 kubelet[3286]: E0124 00:52:19.067040 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-kjw59" podUID="10ee6a73-4c28-4f2c-91e8-1fb5bb1182ff"
Jan 24 00:52:22.069104 kubelet[3286]: E0124 00:52:22.068517 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-665v9" podUID="a99a2457-cb1f-40ec-b343-9e0ed2df6091"
Jan 24 00:52:22.069663 kubelet[3286]: E0124 00:52:22.069427 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-66577d4d7-zhfd5" podUID="d86b5148-8637-436b-954a-278a7b8ba7a4"
Jan 24 00:52:23.559743 systemd[1]: Started sshd@23-10.200.4.29:22-10.200.16.10:32892.service - OpenSSH per-connection server daemon (10.200.16.10:32892).
Jan 24 00:52:24.070402 kubelet[3286]: E0124 00:52:24.070060 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-hw7t4" podUID="11ebe218-698b-4f81-b5c6-5227731a6439"
Jan 24 00:52:24.176883 sshd[6341]: Accepted publickey for core from 10.200.16.10 port 32892 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:24.178342 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:24.185261 systemd-logind[1708]: New session 26 of user core.
Jan 24 00:52:24.189332 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 00:52:24.684739 sshd[6341]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:24.690829 systemd[1]: sshd@23-10.200.4.29:22-10.200.16.10:32892.service: Deactivated successfully.
Jan 24 00:52:24.697650 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 00:52:24.698956 systemd-logind[1708]: Session 26 logged out. Waiting for processes to exit.
Jan 24 00:52:24.701045 systemd-logind[1708]: Removed session 26.
Jan 24 00:52:28.069470 kubelet[3286]: E0124 00:52:28.069059 3286 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6956bf9c49-lv44h" podUID="c5ffb552-fd5c-4233-a4d0-bee61c2df92f"
Jan 24 00:52:28.223690 kubelet[3286]: E0124 00:52:28.223220 3286 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: EOF" event="&Event{ObjectMeta:{calico-apiserver-6956bf9c49-lv44h.188d84620e9dcc74 calico-apiserver 1779 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-6956bf9c49-lv44h,UID:c5ffb552-fd5c-4233-a4d0-bee61c2df92f,APIVersion:v1,ResourceVersion:840,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-f1b70866be,},FirstTimestamp:2026-01-24 00:49:45 +0000 UTC,LastTimestamp:2026-01-24 00:52:28.068964545 +0000 UTC m=+212.119109201,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-f1b70866be,}"
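The final record is a different failure mode: kubelet tried to report the aggregated BackOff event (Count:11, spanning 00:49:45 to 00:52:28) and the API server connection dropped mid-request ("error reading from server: EOF"), so the event is discarded and, as the message says, not retried. Pod syncing continues independently; only event delivery is lost. A hedged check of which events did land, assuming API access and reusing only the pod name from the log:

  kubectl -n calico-apiserver get events --field-selector involvedObject.name=calico-apiserver-6956bf9c49-lv44h

A transient apiserver restart or connection reset is enough to produce a one-off rejection like this.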