Jan 24 00:47:34.065712 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:47:34.065740 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:47:34.065750 kernel: BIOS-provided physical RAM map:
Jan 24 00:47:34.065758 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:47:34.065766 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 24 00:47:34.065772 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 24 00:47:34.065780 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 24 00:47:34.065791 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 24 00:47:34.065800 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 24 00:47:34.065807 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 24 00:47:34.065813 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 24 00:47:34.065823 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 24 00:47:34.065829 kernel: printk: bootconsole [earlyser0] enabled
Jan 24 00:47:34.065836 kernel: NX (Execute Disable) protection: active
Jan 24 00:47:34.065849 kernel: APIC: Static calls initialized
Jan 24 00:47:34.067935 kernel: efi: EFI v2.7 by Microsoft
Jan 24 00:47:34.067963 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98
Jan 24 00:47:34.067979 kernel: SMBIOS 3.1.0 present.
Jan 24 00:47:34.067991 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 24 00:47:34.068005 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 24 00:47:34.068018 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 24 00:47:34.068031 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jan 24 00:47:34.068044 kernel: Hyper-V: Nested features: 0x1e0101
Jan 24 00:47:34.068057 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 24 00:47:34.068074 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 24 00:47:34.068086 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 24 00:47:34.068099 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 24 00:47:34.068113 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 24 00:47:34.068125 kernel: tsc: Detected 2593.907 MHz processor
Jan 24 00:47:34.068137 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:47:34.068150 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:47:34.068162 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 24 00:47:34.068175 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:47:34.068191 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:47:34.068203 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 24 00:47:34.068216 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 24 00:47:34.068228 kernel: Using GB pages for direct mapping
Jan 24 00:47:34.068241 kernel: Secure boot disabled
Jan 24 00:47:34.068252 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:47:34.068265 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 24 00:47:34.068283 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068298 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068312 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 24 00:47:34.068327 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 24 00:47:34.068341 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068356 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068370 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068387 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068402 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068415 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068429 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:47:34.068442 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 24 00:47:34.068456 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 24 00:47:34.068469 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 24 00:47:34.068483 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 24 00:47:34.068499 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 24 00:47:34.068512 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 24 00:47:34.068525 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 24 00:47:34.068538 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 24 00:47:34.068551 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 24 00:47:34.068564 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 24 00:47:34.068577 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 00:47:34.068590 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 00:47:34.068603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 24 00:47:34.068619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 24 00:47:34.068632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 24 00:47:34.068646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 24 00:47:34.068659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 24 00:47:34.068672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 24 00:47:34.068685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 24 00:47:34.068699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 24 00:47:34.068712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 24 00:47:34.068726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 24 00:47:34.068742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 24 00:47:34.068756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 24 00:47:34.068770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 24 00:47:34.068784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 24 00:47:34.068797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 24 00:47:34.068811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 24 00:47:34.068824 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 24 00:47:34.068838 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 24 00:47:34.068852 kernel: Zone ranges:
Jan 24 00:47:34.068890 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:47:34.068904 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:47:34.068917 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 24 00:47:34.068931 kernel: Movable zone start for each node
Jan 24 00:47:34.068944 kernel: Early memory node ranges
Jan 24 00:47:34.068958 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:47:34.068971 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 24 00:47:34.068985 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 24 00:47:34.068998 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 24 00:47:34.069015 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 24 00:47:34.069028 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:47:34.069042 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:47:34.069055 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 24 00:47:34.069069 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 24 00:47:34.069083 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 24 00:47:34.069097 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:47:34.069111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:47:34.069124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:47:34.069141 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 24 00:47:34.069154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:47:34.069168 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 24 00:47:34.069181 kernel: Booting paravirtualized kernel on Hyper-V
Jan 24 00:47:34.069196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:47:34.069210 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:47:34.069224 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:47:34.069237 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:47:34.069250 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:47:34.069267 kernel: Hyper-V: PV spinlocks enabled
Jan 24 00:47:34.069280 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:47:34.069296 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:47:34.069308 kernel: random: crng init done
Jan 24 00:47:34.069321 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 24 00:47:34.069337 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:47:34.069352 kernel: Fallback order for Node 0: 0
Jan 24 00:47:34.069368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 24 00:47:34.069387 kernel: Policy zone: Normal
Jan 24 00:47:34.069415 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:47:34.069431 kernel: software IO TLB: area num 2.
Jan 24 00:47:34.069452 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved)
Jan 24 00:47:34.069468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:47:34.069485 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:47:34.069500 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:47:34.069515 kernel: Dynamic Preempt: voluntary
Jan 24 00:47:34.069531 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:47:34.069547 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:47:34.069566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:47:34.069580 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:47:34.069595 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:47:34.069610 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:47:34.069624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:47:34.069638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:47:34.069655 kernel: Using NULL legacy PIC
Jan 24 00:47:34.069669 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 24 00:47:34.069683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:47:34.069697 kernel: Console: colour dummy device 80x25
Jan 24 00:47:34.069711 kernel: printk: console [tty1] enabled
Jan 24 00:47:34.069726 kernel: printk: console [ttyS0] enabled
Jan 24 00:47:34.069740 kernel: printk: bootconsole [earlyser0] disabled
Jan 24 00:47:34.069754 kernel: ACPI: Core revision 20230628
Jan 24 00:47:34.069768 kernel: Failed to register legacy timer interrupt
Jan 24 00:47:34.069782 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:47:34.069800 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 24 00:47:34.069814 kernel: Hyper-V: Using IPI hypercalls
Jan 24 00:47:34.069829 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 24 00:47:34.069843 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 24 00:47:34.070905 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 24 00:47:34.070921 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 24 00:47:34.070931 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 24 00:47:34.070940 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 24 00:47:34.070951 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 24 00:47:34.070967 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 24 00:47:34.070978 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 24 00:47:34.070989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:47:34.070998 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:47:34.071008 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:47:34.071019 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 24 00:47:34.071028 kernel: RETBleed: Vulnerable
Jan 24 00:47:34.071039 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:47:34.071047 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:47:34.071057 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:47:34.071069 kernel: active return thunk: its_return_thunk
Jan 24 00:47:34.071079 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 00:47:34.071089 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:47:34.071097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:47:34.071108 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:47:34.071116 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:47:34.071128 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:47:34.071136 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:47:34.071146 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:47:34.071156 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 24 00:47:34.071164 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 24 00:47:34.071177 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 24 00:47:34.071185 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 24 00:47:34.071196 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:47:34.071204 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:47:34.071214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:47:34.071224 kernel: landlock: Up and running.
Jan 24 00:47:34.071233 kernel: SELinux: Initializing.
Jan 24 00:47:34.071243 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:47:34.071251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:47:34.071263 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 24 00:47:34.071271 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:47:34.071284 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:47:34.071293 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:47:34.071304 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 24 00:47:34.071312 kernel: signal: max sigframe size: 3632
Jan 24 00:47:34.071322 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:47:34.071332 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:47:34.071340 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:47:34.071352 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:47:34.071363 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:47:34.071376 kernel: .... node #0, CPUs: #1
Jan 24 00:47:34.071386 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 24 00:47:34.071397 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 24 00:47:34.071408 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:47:34.071418 kernel: smpboot: Max logical packages: 1
Jan 24 00:47:34.071427 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 24 00:47:34.071436 kernel: devtmpfs: initialized
Jan 24 00:47:34.071447 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:47:34.071459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 24 00:47:34.071469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:47:34.071477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:47:34.071489 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:47:34.071497 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:47:34.071508 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:47:34.071517 kernel: audit: type=2000 audit(1769215652.029:1): state=initialized audit_enabled=0 res=1
Jan 24 00:47:34.071525 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:47:34.071533 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:47:34.071543 kernel: cpuidle: using governor menu
Jan 24 00:47:34.071552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:47:34.071563 kernel: dca service started, version 1.12.1
Jan 24 00:47:34.071571 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 24 00:47:34.071579 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:47:34.071588 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:47:34.071596 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:47:34.071606 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:47:34.071616 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:47:34.071626 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:47:34.071638 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:47:34.071646 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:47:34.071657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:47:34.071666 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:47:34.071676 kernel: ACPI: Interpreter enabled
Jan 24 00:47:34.071685 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:47:34.071693 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:47:34.071705 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:47:34.071716 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 24 00:47:34.071728 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 24 00:47:34.071736 kernel: iommu: Default domain type: Translated
Jan 24 00:47:34.071744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:47:34.071755 kernel: efivars: Registered efivars operations
Jan 24 00:47:34.071763 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:47:34.071772 kernel: PCI: System does not support PCI
Jan 24 00:47:34.071783 kernel: vgaarb: loaded
Jan 24 00:47:34.071791 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 24 00:47:34.071804 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:47:34.071812 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:47:34.071822 kernel: pnp: PnP ACPI init
Jan 24 00:47:34.071832 kernel: pnp: PnP ACPI: found 3 devices
Jan 24 00:47:34.071843 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:47:34.071851 kernel: NET: Registered PF_INET protocol family
Jan 24 00:47:34.071869 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 00:47:34.071881 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 24 00:47:34.071891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:47:34.071906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:47:34.071914 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 24 00:47:34.071925 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 24 00:47:34.071934 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:47:34.071942 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:47:34.071952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:47:34.071962 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:47:34.071970 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:47:34.071980 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:47:34.071992 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Jan 24 00:47:34.072001 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 24 00:47:34.072012 kernel: Initialise system trusted keyrings
Jan 24 00:47:34.072021 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 24 00:47:34.072032 kernel: Key type asymmetric registered
Jan 24 00:47:34.072040 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:47:34.072050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:47:34.072060 kernel: io scheduler mq-deadline registered
Jan 24 00:47:34.072068 kernel: io scheduler kyber registered
Jan 24 00:47:34.072081 kernel: io scheduler bfq registered
Jan 24 00:47:34.072089 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:47:34.072101 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:47:34.072109 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:47:34.072117 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 24 00:47:34.072127 kernel: i8042: PNP: No PS/2 controller found.
Jan 24 00:47:34.072270 kernel: rtc_cmos 00:02: registered as rtc0
Jan 24 00:47:34.072377 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:47:33 UTC (1769215653)
Jan 24 00:47:34.072477 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 24 00:47:34.072490 kernel: intel_pstate: CPU model not supported
Jan 24 00:47:34.072499 kernel: efifb: probing for efifb
Jan 24 00:47:34.072510 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 24 00:47:34.072518 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 24 00:47:34.072527 kernel: efifb: scrolling: redraw
Jan 24 00:47:34.072535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:47:34.072546 kernel: Console: switching to colour frame buffer device 128x48
Jan 24 00:47:34.072554 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:47:34.072567 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:47:34.072577 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:47:34.072585 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:47:34.072595 kernel: Segment Routing with IPv6
Jan 24 00:47:34.072604 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:47:34.072612 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:47:34.072622 kernel: Key type dns_resolver registered
Jan 24 00:47:34.072632 kernel: IPI shorthand broadcast: enabled
Jan 24 00:47:34.072641 kernel: sched_clock: Marking stable (840002600, 44460800)->(1075423800, -190960400)
Jan 24 00:47:34.072654 kernel: registered taskstats version 1
Jan 24 00:47:34.072665 kernel: Loading compiled-in X.509 certificates
Jan 24 00:47:34.072674 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:47:34.072687 kernel: Key type .fscrypt registered
Jan 24 00:47:34.072696 kernel: Key type fscrypt-provisioning registered
Jan 24 00:47:34.072707 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:47:34.072716 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:47:34.072724 kernel: ima: No architecture policies found
Jan 24 00:47:34.072735 kernel: clk: Disabling unused clocks
Jan 24 00:47:34.072747 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:47:34.072758 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:47:34.072769 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:47:34.072778 kernel: Run /init as init process
Jan 24 00:47:34.072789 kernel: with arguments:
Jan 24 00:47:34.072797 kernel: /init
Jan 24 00:47:34.072807 kernel: with environment:
Jan 24 00:47:34.072818 kernel: HOME=/
Jan 24 00:47:34.072826 kernel: TERM=linux
Jan 24 00:47:34.072839 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:47:34.072852 systemd[1]: Detected virtualization microsoft.
Jan 24 00:47:34.075885 systemd[1]: Detected architecture x86-64.
Jan 24 00:47:34.075898 systemd[1]: Running in initrd.
Jan 24 00:47:34.075910 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:47:34.075919 systemd[1]: Hostname set to .
Jan 24 00:47:34.075930 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:47:34.075944 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:47:34.075956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:47:34.075965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:47:34.075977 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:47:34.075987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:47:34.075996 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:47:34.076008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:47:34.076022 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:47:34.076033 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:47:34.076042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:47:34.076054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:47:34.076063 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:47:34.076074 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:47:34.076083 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:47:34.076094 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:47:34.076106 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:47:34.076117 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:47:34.076127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:47:34.076136 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:47:34.076147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:47:34.076156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:47:34.076168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:47:34.076177 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:47:34.076188 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:47:34.076203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:47:34.076212 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:47:34.076223 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:47:34.076233 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:47:34.076242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:47:34.076277 systemd-journald[177]: Collecting audit messages is disabled.
Jan 24 00:47:34.076304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:47:34.076317 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:47:34.076326 systemd-journald[177]: Journal started
Jan 24 00:47:34.076349 systemd-journald[177]: Runtime Journal (/run/log/journal/6f90b73944f846b3ab51321c0a93bc66) is 8.0M, max 158.8M, 150.8M free.
Jan 24 00:47:34.085398 systemd-modules-load[178]: Inserted module 'overlay'
Jan 24 00:47:34.093057 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:47:34.096158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:47:34.102799 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:47:34.121040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:47:34.124985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:47:34.141967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:47:34.142764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:47:34.152053 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 24 00:47:34.154600 kernel: Bridge firewalling registered
Jan 24 00:47:34.154916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:47:34.160740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:47:34.171076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:47:34.186028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:47:34.206037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:47:34.210171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:47:34.220311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:47:34.227015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:47:34.227253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:47:34.237126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:47:34.246405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:47:34.252347 dracut-cmdline[213]: dracut-dracut-053
Jan 24 00:47:34.254920 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:47:34.312201 systemd-resolved[218]: Positive Trust Anchors:
Jan 24 00:47:34.312223 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:47:34.312268 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:47:34.337784 systemd-resolved[218]: Defaulting to hostname 'linux'.
Jan 24 00:47:34.339036 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:47:34.347557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:47:34.355875 kernel: SCSI subsystem initialized
Jan 24 00:47:34.366875 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:47:34.376879 kernel: iscsi: registered transport (tcp)
Jan 24 00:47:34.398225 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:47:34.398286 kernel: QLogic iSCSI HBA Driver
Jan 24 00:47:34.434709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:47:34.446982 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:47:34.478285 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:47:34.478369 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:47:34.481647 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:47:34.521887 kernel: raid6: avx512x4 gen() 18480 MB/s
Jan 24 00:47:34.540876 kernel: raid6: avx512x2 gen() 18404 MB/s
Jan 24 00:47:34.559872 kernel: raid6: avx512x1 gen() 18367 MB/s
Jan 24 00:47:34.578870 kernel: raid6: avx2x4 gen() 18366 MB/s
Jan 24 00:47:34.597877 kernel: raid6: avx2x2 gen() 18331 MB/s
Jan 24 00:47:34.617669 kernel: raid6: avx2x1 gen() 13943 MB/s
Jan 24 00:47:34.617706 kernel: raid6: using algorithm avx512x4 gen() 18480 MB/s
Jan 24 00:47:34.638708 kernel: raid6: .... xor() 8260 MB/s, rmw enabled
Jan 24 00:47:34.638735 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:47:34.661884 kernel: xor: automatically using best checksumming function avx
Jan 24 00:47:34.808887 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:47:34.818696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:47:34.827027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:47:34.840819 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Jan 24 00:47:34.845424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:47:34.861006 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:47:34.872893 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation
Jan 24 00:47:34.900700 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:47:34.910004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:47:34.951369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:47:34.963058 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:47:34.977673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:47:34.987448 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:47:34.994068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:47:34.997114 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:47:35.016068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:47:35.033883 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:47:35.056890 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:47:35.057312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:47:35.068805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:47:35.079138 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:47:35.069031 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:47:35.082693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:47:35.088924 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:47:35.089189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:47:35.103623 kernel: hv_vmbus: Vmbus version:5.2
Jan 24 00:47:35.107825 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:47:35.128876 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 24 00:47:35.124337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:47:35.133383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:47:35.142016 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 24 00:47:35.142040 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 24 00:47:35.136100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:47:35.156941 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 24 00:47:35.157266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:47:35.173884 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:47:35.182939 kernel: hv_vmbus: registering driver hv_storvsc
Jan 24 00:47:35.186877 kernel: hv_vmbus: registering driver hid_hyperv
Jan 24 00:47:35.191870 kernel: scsi host1: storvsc_host_t
Jan 24 00:47:35.192062 kernel: scsi host0: storvsc_host_t
Jan 24 00:47:35.192185 kernel: PTP clock support registered
Jan 24 00:47:35.198875 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 24 00:47:35.198910 kernel: hv_utils: Registering HyperV Utility Driver
Jan 24 00:47:35.202029 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 24 00:47:35.202069 kernel: hv_vmbus: registering driver hv_utils
Jan 24 00:47:35.213026 kernel: hv_utils: Shutdown IC version 3.2
Jan 24 00:47:35.213062 kernel: hv_utils: Heartbeat IC version 3.0
Jan 24 00:47:35.213085 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 24 00:47:35.213233 kernel: hv_utils: TimeSync IC version 4.0
Jan 24 00:47:35.334357 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 24 00:47:35.332184 systemd-resolved[218]: Clock change detected. Flushing caches.
Jan 24 00:47:35.349884 kernel: hv_vmbus: registering driver hv_netvsc
Jan 24 00:47:35.342981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:47:35.353996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:47:35.372688 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 24 00:47:35.373033 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:47:35.378812 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 24 00:47:35.400173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:47:35.415838 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 24 00:47:35.416073 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 24 00:47:35.416206 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:47:35.416350 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 24 00:47:35.416462 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 24 00:47:35.419780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:47:35.422787 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:47:35.590790 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: VF slot 1 added
Jan 24 00:47:35.599751 kernel: hv_vmbus: registering driver hv_pci
Jan 24 00:47:35.599805 kernel: hv_pci ba335987-aebe-493f-9405-6d4ab83a95b7: PCI VMBus probing: Using version 0x10004
Jan 24 00:47:35.608541 kernel: hv_pci ba335987-aebe-493f-9405-6d4ab83a95b7: PCI host bridge to bus aebe:00
Jan 24 00:47:35.608823 kernel: pci_bus aebe:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 24 00:47:35.611593 kernel: pci_bus aebe:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 24 00:47:35.615833 kernel: pci aebe:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 24 00:47:35.623758 kernel: pci aebe:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:47:35.628134 kernel: pci aebe:00:02.0: enabling Extended Tags
Jan 24 00:47:35.639815 kernel: pci aebe:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at aebe:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 24 00:47:35.646039 kernel: pci_bus aebe:00: busn_res: [bus 00-ff] end is updated to 00
Jan 24 00:47:35.646340 kernel: pci aebe:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:47:35.821151 kernel: mlx5_core aebe:00:02.0: enabling device (0000 -> 0002)
Jan 24 00:47:35.825791 kernel: mlx5_core aebe:00:02.0: firmware version: 14.30.5026
Jan 24 00:47:36.001933 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 24 00:47:36.021791 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (467)
Jan 24 00:47:36.041049 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (447)
Jan 24 00:47:36.050543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 24 00:47:36.063939 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: VF registering: eth1
Jan 24 00:47:36.071092 kernel: mlx5_core aebe:00:02.0 eth1: joined to eth0
Jan 24 00:47:36.071348 kernel: mlx5_core aebe:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 24 00:47:36.071856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 24 00:47:36.084025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 24 00:47:36.092231 kernel: mlx5_core aebe:00:02.0 enP44734s1: renamed from eth1
Jan 24 00:47:36.092242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 24 00:47:36.106970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:47:36.129782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:47:36.139781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:47:36.148810 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:47:37.159783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:47:37.160458 disk-uuid[603]: The operation has completed successfully.
Jan 24 00:47:37.263265 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:47:37.263393 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:47:37.292915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:47:37.299855 sh[716]: Success
Jan 24 00:47:37.333801 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:47:37.668051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:47:37.670865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:47:37.679109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:47:37.695784 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:47:37.695822 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:47:37.701585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:47:37.704660 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:47:37.707170 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:47:38.079334 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:47:38.080321 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:47:38.090019 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:47:38.095938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:47:38.117900 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:47:38.117959 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:47:38.120418 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:47:38.165792 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:47:38.177285 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:47:38.182867 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:47:38.193778 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:47:38.199266 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:47:38.209980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:47:38.217736 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:47:38.244920 systemd-networkd[901]: lo: Link UP
Jan 24 00:47:38.244929 systemd-networkd[901]: lo: Gained carrier
Jan 24 00:47:38.247147 systemd-networkd[901]: Enumeration completed
Jan 24 00:47:38.247597 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:47:38.249010 systemd[1]: Reached target network.target - Network.
Jan 24 00:47:38.249740 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:47:38.249744 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:47:38.304790 kernel: mlx5_core aebe:00:02.0 enP44734s1: Link up
Jan 24 00:47:38.338393 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: Data path switched to VF: enP44734s1
Jan 24 00:47:38.337814 systemd-networkd[901]: enP44734s1: Link UP
Jan 24 00:47:38.338001 systemd-networkd[901]: eth0: Link UP
Jan 24 00:47:38.338247 systemd-networkd[901]: eth0: Gained carrier
Jan 24 00:47:38.338261 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:47:38.342966 systemd-networkd[901]: enP44734s1: Gained carrier
Jan 24 00:47:38.370813 systemd-networkd[901]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:47:39.249573 ignition[899]: Ignition 2.19.0
Jan 24 00:47:39.249586 ignition[899]: Stage: fetch-offline
Jan 24 00:47:39.249631 ignition[899]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:47:39.249642 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:47:39.249777 ignition[899]: parsed url from cmdline: ""
Jan 24 00:47:39.249783 ignition[899]: no config URL provided
Jan 24 00:47:39.249791 ignition[899]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:47:39.249803 ignition[899]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:47:39.249810 ignition[899]: failed to fetch config: resource requires networking
Jan 24 00:47:39.251567 ignition[899]: Ignition finished successfully
Jan 24 00:47:39.269984 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:47:39.279025 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:47:39.296686 ignition[910]: Ignition 2.19.0
Jan 24 00:47:39.296697 ignition[910]: Stage: fetch
Jan 24 00:47:39.296950 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:47:39.296964 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:47:39.297079 ignition[910]: parsed url from cmdline: ""
Jan 24 00:47:39.297084 ignition[910]: no config URL provided
Jan 24 00:47:39.297089 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:47:39.297099 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:47:39.297119 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 24 00:47:39.434827 ignition[910]: GET result: OK
Jan 24 00:47:39.434937 ignition[910]: config has been read from IMDS userdata
Jan 24 00:47:39.434978 ignition[910]: parsing config with SHA512: ffdb30ca0abd66e83d1b7275b877ebbd3020f22a37af6c4c82eb4eebf484c4312c1a487516214058261f4e0f1f70e69f367def80d71c889a6774be5975fde799
Jan 24 00:47:39.440209 unknown[910]: fetched base config from "system"
Jan 24 00:47:39.440228 unknown[910]: fetched base config from "system"
Jan 24 00:47:39.441577 ignition[910]: fetch: fetch complete
Jan 24 00:47:39.440238 unknown[910]: fetched user config from "azure"
Jan 24 00:47:39.441584 ignition[910]: fetch: fetch passed
Jan 24 00:47:39.443585 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:47:39.441655 ignition[910]: Ignition finished successfully
Jan 24 00:47:39.457908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:47:39.473555 ignition[917]: Ignition 2.19.0
Jan 24 00:47:39.473567 ignition[917]: Stage: kargs
Jan 24 00:47:39.473803 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:47:39.473817 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:47:39.478931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:47:39.474734 ignition[917]: kargs: kargs passed
Jan 24 00:47:39.474828 ignition[917]: Ignition finished successfully
Jan 24 00:47:39.491621 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:47:39.503303 ignition[923]: Ignition 2.19.0
Jan 24 00:47:39.503314 ignition[923]: Stage: disks
Jan 24 00:47:39.503534 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:47:39.506539 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:47:39.503547 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:47:39.509732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:47:39.504469 ignition[923]: disks: disks passed
Jan 24 00:47:39.514682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:47:39.504512 ignition[923]: Ignition finished successfully
Jan 24 00:47:39.515203 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:47:39.515611 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:47:39.516019 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:47:39.526985 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:47:39.613661 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 24 00:47:39.620174 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:47:39.630946 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:47:39.723780 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:47:39.724269 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:47:39.726933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:47:39.771911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:47:39.790786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Jan 24 00:47:39.790851 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:47:39.796675 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:47:39.796733 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:47:39.797628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:47:39.810793 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:47:39.808997 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 24 00:47:39.811934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:47:39.811970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:47:39.816338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:47:39.818220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:47:39.821935 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:47:39.874058 systemd-networkd[901]: eth0: Gained IPv6LL
Jan 24 00:47:40.546383 coreos-metadata[958]: Jan 24 00:47:40.546 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 24 00:47:40.552750 coreos-metadata[958]: Jan 24 00:47:40.552 INFO Fetch successful
Jan 24 00:47:40.556106 coreos-metadata[958]: Jan 24 00:47:40.552 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 24 00:47:40.563997 coreos-metadata[958]: Jan 24 00:47:40.563 INFO Fetch successful
Jan 24 00:47:40.582090 coreos-metadata[958]: Jan 24 00:47:40.581 INFO wrote hostname ci-4081.3.6-n-d923855e69 to /sysroot/etc/hostname
Jan 24 00:47:40.584072 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:47:40.657932 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:47:40.710687 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:47:40.719673 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:47:40.727030 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:47:41.749457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:47:41.759881 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:47:41.763423 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:47:41.777927 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:47:41.783930 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:47:41.808284 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:47:41.817210 ignition[1060]: INFO : Ignition 2.19.0 Jan 24 00:47:41.817210 ignition[1060]: INFO : Stage: mount Jan 24 00:47:41.823689 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:41.823689 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:41.823689 ignition[1060]: INFO : mount: mount passed Jan 24 00:47:41.823689 ignition[1060]: INFO : Ignition finished successfully Jan 24 00:47:41.820091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:47:41.841953 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:47:41.848895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:47:41.870785 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Jan 24 00:47:41.870840 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:41.874783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:47:41.878441 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:47:41.885787 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:47:41.887461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:47:41.911655 ignition[1089]: INFO : Ignition 2.19.0 Jan 24 00:47:41.911655 ignition[1089]: INFO : Stage: files Jan 24 00:47:41.916445 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:41.916445 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:41.916445 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:47:41.916445 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:47:41.916445 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:47:42.010405 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:47:42.011050 unknown[1089]: wrote ssh authorized keys file for user: core Jan 24 00:47:42.031921 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:47:42.031921 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:47:42.059809 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:47:42.121752 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/install.sh" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:47:42.556661 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:47:42.750574 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.750574 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 24 00:47:42.784679 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): [finished] processing unit 
"prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: files passed Jan 24 00:47:42.791022 ignition[1089]: INFO : Ignition finished successfully Jan 24 00:47:42.787333 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:47:42.845383 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:47:42.851810 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:47:42.855052 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:47:42.855174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:47:42.870983 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.870983 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.879307 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.875534 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:47:42.882512 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:47:42.898921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:47:42.928826 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:47:42.928940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:47:42.935105 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:47:42.943409 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:47:42.945881 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:47:42.955940 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:47:42.969124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:47:42.978245 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:47:42.990368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:47:42.990546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:47:42.991426 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:47:42.991826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:47:42.991959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:47:42.992636 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:47:42.993124 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:47:42.993496 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 24 00:47:42.993894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:47:42.994307 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:47:42.994771 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:47:42.995255 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:47:42.995689 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:47:42.996108 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:47:42.996504 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:47:42.996872 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:47:42.997001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:47:42.997720 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:47:42.998229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:47:42.998700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:47:43.036573 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:47:43.088394 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:47:43.090978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:47:43.096300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:47:43.099054 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:47:43.105443 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:47:43.107691 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:47:43.112504 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:47:43.115084 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:47:43.125950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:47:43.132035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:47:43.134132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:47:43.134405 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:47:43.137383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:47:43.137532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:47:43.149250 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:47:43.149367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:47:43.165528 ignition[1141]: INFO : Ignition 2.19.0 Jan 24 00:47:43.165528 ignition[1141]: INFO : Stage: umount Jan 24 00:47:43.165528 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:43.165528 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:43.174287 ignition[1141]: INFO : umount: umount passed Jan 24 00:47:43.174287 ignition[1141]: INFO : Ignition finished successfully Jan 24 00:47:43.166977 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:47:43.167091 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:47:43.172240 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 24 00:47:43.172492 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:47:43.191818 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:47:43.191896 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:47:43.198856 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:47:43.198923 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:47:43.206078 systemd[1]: Stopped target network.target - Network. Jan 24 00:47:43.210141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:47:43.212793 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:47:43.218312 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:47:43.222624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:47:43.225818 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:47:43.234454 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:47:43.238713 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:47:43.243533 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:47:43.243595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:47:43.250027 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:47:43.250082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:47:43.256874 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:47:43.259242 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:47:43.263875 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:47:43.263936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:47:43.269110 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:47:43.274292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:47:43.283101 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:47:43.286260 systemd-networkd[901]: eth0: DHCPv6 lease lost Jan 24 00:47:43.289084 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:47:43.289205 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:47:43.296101 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:47:43.296151 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:47:43.308864 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:47:43.313389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:47:43.313459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:47:43.317263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:47:43.326169 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:47:43.329319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:47:43.345091 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:47:43.345265 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:47:43.355216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:47:43.355277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 24 00:47:43.360412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:47:43.360456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:47:43.365749 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:47:43.365814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:47:43.378237 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:47:43.378315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:47:43.383330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:47:43.383381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:47:43.398779 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: Data path switched from VF: enP44734s1 Jan 24 00:47:43.402926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:47:43.405566 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:47:43.405636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:47:43.413706 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:47:43.413787 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:47:43.419283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:47:43.419335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:47:43.424940 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:47:43.424996 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:47:43.430491 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:47:43.430548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:47:43.436297 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:47:43.436352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:47:43.442352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:43.442406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:43.448254 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:47:43.448348 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:47:43.454293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:47:43.454380 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:47:43.675670 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:47:43.675858 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:47:43.681008 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:47:43.685702 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:47:43.685782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:47:43.701938 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:47:43.873565 systemd[1]: Switching root. 
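
Back in the files stage, op(4) streamed a Helm release tarball from get.helm.sh into /sysroot/opt, which survives the switch root just logged. Ignition itself is Go and wraps each write in retries and result records; this Python sketch mirrors only the basic GET-and-write step, with URL and destination copied from the log:

    import pathlib, urllib.request

    URL = "https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz"
    DEST = pathlib.Path("/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz")

    DEST.parent.mkdir(parents=True, exist_ok=True)
    with urllib.request.urlopen(URL, timeout=30) as resp:
        DEST.write_bytes(resp.read())  # real Ignition also verifies and retries; omitted here
    print(f"wrote {DEST} ({DEST.stat().st_size} bytes)")
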
Jan 24 00:47:43.909903 systemd-journald[177]: Journal stopped Jan 24 00:47:34.068499 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 24 00:47:34.068512 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 24 00:47:34.068525 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 24 00:47:34.068538 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 24 00:47:34.068551 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 24 00:47:34.068564 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 24 00:47:34.068577 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:47:34.068590 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:47:34.068603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 24 00:47:34.068619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 24 00:47:34.068632 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 24 00:47:34.068646 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 24 00:47:34.068659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 24 00:47:34.068672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 24 00:47:34.068685 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 24 00:47:34.068699 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 24 00:47:34.068712 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 24 00:47:34.068726 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 24 00:47:34.068742 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 24 00:47:34.068756 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 24 00:47:34.068770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 24 00:47:34.068784 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 24 00:47:34.068797 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 24 00:47:34.068811 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 24 00:47:34.068824 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 24 00:47:34.068838 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 24 00:47:34.068852 kernel: Zone ranges: Jan 24 00:47:34.068890 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:47:34.068904 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:47:34.068917 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:47:34.068931 kernel: Movable zone start for each node Jan 24 00:47:34.068944 kernel: Early memory node ranges Jan 24 00:47:34.068958 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:47:34.068971 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 24 00:47:34.068985 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 24 00:47:34.068998 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:47:34.069015 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 24 00:47:34.069028 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:47:34.069042 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:47:34.069055 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 24 00:47:34.069069 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 24 00:47:34.069083 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 24 00:47:34.069097 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:47:34.069111 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:47:34.069124 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:47:34.069141 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 24 00:47:34.069154 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:47:34.069168 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 24 00:47:34.069181 kernel: Booting paravirtualized kernel on Hyper-V Jan 24 00:47:34.069196 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:47:34.069210 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:47:34.069224 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:47:34.069237 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:47:34.069250 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:47:34.069267 kernel: Hyper-V: PV spinlocks enabled Jan 24 00:47:34.069280 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:47:34.069296 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:47:34.069308 kernel: random: crng init done Jan 24 00:47:34.069321 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 24 00:47:34.069337 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:47:34.069352 kernel: Fallback order for Node 0: 0 Jan 24 00:47:34.069368 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 24 00:47:34.069387 kernel: Policy zone: Normal Jan 24 00:47:34.069415 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:47:34.069431 kernel: software IO TLB: area num 2. Jan 24 00:47:34.069452 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved) Jan 24 00:47:34.069468 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:47:34.069485 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:47:34.069500 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:47:34.069515 kernel: Dynamic Preempt: voluntary Jan 24 00:47:34.069531 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:47:34.069547 kernel: rcu: RCU event tracing is enabled. Jan 24 00:47:34.069566 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:47:34.069580 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:47:34.069595 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:47:34.069610 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:47:34.069624 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
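
The hash-table lines above encode their footprint as a page "order": the dentry cache reports 1048576 entries, order 11, 8388608 bytes. Assuming 8-byte buckets and 4 KiB pages (both consistent with every hash-table line in this log), the numbers line up:

    import math

    entries, bucket_bytes, page = 1048576, 8, 4096
    total = entries * bucket_bytes           # 8388608 bytes, as logged
    order = int(math.log2(total / page))     # 2048 pages -> order 11
    print(total, order)                      # 8388608 11

The same check works for the inode cache above: 524288 entries * 8 bytes = 4194304 bytes = 1024 pages = order 10.
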
Jan 24 00:47:34.069638 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:47:34.069655 kernel: Using NULL legacy PIC Jan 24 00:47:34.069669 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 24 00:47:34.069683 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:47:34.069697 kernel: Console: colour dummy device 80x25 Jan 24 00:47:34.069711 kernel: printk: console [tty1] enabled Jan 24 00:47:34.069726 kernel: printk: console [ttyS0] enabled Jan 24 00:47:34.069740 kernel: printk: bootconsole [earlyser0] disabled Jan 24 00:47:34.069754 kernel: ACPI: Core revision 20230628 Jan 24 00:47:34.069768 kernel: Failed to register legacy timer interrupt Jan 24 00:47:34.069782 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:47:34.069800 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 24 00:47:34.069814 kernel: Hyper-V: Using IPI hypercalls Jan 24 00:47:34.069829 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 24 00:47:34.069843 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 24 00:47:34.070905 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 24 00:47:34.070921 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 24 00:47:34.070931 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 24 00:47:34.070940 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 24 00:47:34.070951 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 24 00:47:34.070967 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:47:34.070978 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:47:34.070989 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:47:34.070998 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:47:34.071008 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:47:34.071019 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
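
The "Calibrating delay loop (skipped)" line above derives BogoMIPS from the timer frequency instead of measuring it. Reproducing the kernel's integer formatting with lpj=2593907 from the log and an assumed HZ=1000 (the log does not print HZ) yields both the per-CPU figure here and the two-CPU total reported a few lines below:

    lpj, hz = 2593907, 1000  # lpj from the log; HZ=1000 is an assumption

    def bogomips(lpj):
        # The kernel prints "%lu.%02lu" with integer division,
        # which is why the total comes out 10375.62 rather than 10375.63.
        return f"{lpj // (500000 // hz)}.{(lpj // (5000 // hz)) % 100:02d}"

    print(bogomips(lpj), bogomips(2 * lpj))  # 5187.81 10375.62
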
Jan 24 00:47:34.071028 kernel: RETBleed: Vulnerable Jan 24 00:47:34.071039 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:47:34.071047 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:47:34.071057 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:47:34.071069 kernel: active return thunk: its_return_thunk Jan 24 00:47:34.071079 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:47:34.071089 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:47:34.071097 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:47:34.071108 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:47:34.071116 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:47:34.071128 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:47:34.071136 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:47:34.071146 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:47:34.071156 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:47:34.071164 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:47:34.071177 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:47:34.071185 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 24 00:47:34.071196 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:47:34.071204 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:47:34.071214 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:47:34.071224 kernel: landlock: Up and running. Jan 24 00:47:34.071233 kernel: SELinux: Initializing. Jan 24 00:47:34.071243 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:47:34.071251 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:47:34.071263 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:47:34.071271 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:47:34.071284 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:47:34.071293 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:47:34.071304 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:47:34.071312 kernel: signal: max sigframe size: 3632 Jan 24 00:47:34.071322 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:47:34.071332 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:47:34.071340 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:47:34.071352 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:47:34.071363 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:47:34.071376 kernel: .... node #0, CPUs: #1 Jan 24 00:47:34.071386 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 24 00:47:34.071397 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
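
The mitigation lines above (retpolines for Spectre V2; RETBleed, TAA, and MMIO Stale Data left vulnerable on this Xeon) are also queryable at runtime: the kernel exposes one file per known CPU bug under sysfs. A small sketch that reprints them on a live system:

    import pathlib

    VULNS = pathlib.Path("/sys/devices/system/cpu/vulnerabilities")
    for f in sorted(VULNS.iterdir()):        # e.g. retbleed, tsx_async_abort, mmio_stale_data
        print(f"{f.name}: {f.read_text().strip()}")
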
Jan 24 00:47:34.071408 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:47:34.071418 kernel: smpboot: Max logical packages: 1 Jan 24 00:47:34.071427 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 24 00:47:34.071436 kernel: devtmpfs: initialized Jan 24 00:47:34.071447 kernel: x86/mm: Memory block size: 128MB Jan 24 00:47:34.071459 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 24 00:47:34.071469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:47:34.071477 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:47:34.071489 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:47:34.071497 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:47:34.071508 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:47:34.071517 kernel: audit: type=2000 audit(1769215652.029:1): state=initialized audit_enabled=0 res=1 Jan 24 00:47:34.071525 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:47:34.071533 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:47:34.071543 kernel: cpuidle: using governor menu Jan 24 00:47:34.071552 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:47:34.071563 kernel: dca service started, version 1.12.1 Jan 24 00:47:34.071571 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 24 00:47:34.071579 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:47:34.071588 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:47:34.071596 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:47:34.071606 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:47:34.071616 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:47:34.071626 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:47:34.071638 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:47:34.071646 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:47:34.071657 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:47:34.071666 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:47:34.071676 kernel: ACPI: Interpreter enabled Jan 24 00:47:34.071685 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:47:34.071693 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:47:34.071705 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:47:34.071716 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 24 00:47:34.071728 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 24 00:47:34.071736 kernel: iommu: Default domain type: Translated Jan 24 00:47:34.071744 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:47:34.071755 kernel: efivars: Registered efivars operations Jan 24 00:47:34.071763 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:47:34.071772 kernel: PCI: System does not support PCI Jan 24 00:47:34.071783 kernel: vgaarb: loaded Jan 24 00:47:34.071791 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 24 00:47:34.071804 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:47:34.071812 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:47:34.071822 kernel: pnp: PnP ACPI init Jan 24 00:47:34.071832 kernel: pnp: PnP ACPI: found 3 
devices Jan 24 00:47:34.071843 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:47:34.071851 kernel: NET: Registered PF_INET protocol family Jan 24 00:47:34.071869 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:47:34.071881 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 24 00:47:34.071891 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:47:34.071906 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:47:34.071914 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 24 00:47:34.071925 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 24 00:47:34.071934 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:47:34.071942 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:47:34.071952 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:47:34.071962 kernel: NET: Registered PF_XDP protocol family Jan 24 00:47:34.071970 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:47:34.071980 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:47:34.071992 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB) Jan 24 00:47:34.072001 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:47:34.072012 kernel: Initialise system trusted keyrings Jan 24 00:47:34.072021 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 24 00:47:34.072032 kernel: Key type asymmetric registered Jan 24 00:47:34.072040 kernel: Asymmetric key parser 'x509' registered Jan 24 00:47:34.072050 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:47:34.072060 kernel: io scheduler mq-deadline registered Jan 24 00:47:34.072068 kernel: io scheduler kyber registered Jan 24 00:47:34.072081 kernel: io scheduler bfq registered Jan 24 00:47:34.072089 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:47:34.072101 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:47:34.072109 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:47:34.072117 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 24 00:47:34.072127 kernel: i8042: PNP: No PS/2 controller found. 
Jan 24 00:47:34.072270 kernel: rtc_cmos 00:02: registered as rtc0 Jan 24 00:47:34.072377 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:47:33 UTC (1769215653) Jan 24 00:47:34.072477 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 24 00:47:34.072490 kernel: intel_pstate: CPU model not supported Jan 24 00:47:34.072499 kernel: efifb: probing for efifb Jan 24 00:47:34.072510 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 24 00:47:34.072518 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 24 00:47:34.072527 kernel: efifb: scrolling: redraw Jan 24 00:47:34.072535 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:47:34.072546 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:47:34.072554 kernel: fb0: EFI VGA frame buffer device Jan 24 00:47:34.072567 kernel: pstore: Using crash dump compression: deflate Jan 24 00:47:34.072577 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:47:34.072585 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:47:34.072595 kernel: Segment Routing with IPv6 Jan 24 00:47:34.072604 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:47:34.072612 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:47:34.072622 kernel: Key type dns_resolver registered Jan 24 00:47:34.072632 kernel: IPI shorthand broadcast: enabled Jan 24 00:47:34.072641 kernel: sched_clock: Marking stable (840002600, 44460800)->(1075423800, -190960400) Jan 24 00:47:34.072654 kernel: registered taskstats version 1 Jan 24 00:47:34.072665 kernel: Loading compiled-in X.509 certificates Jan 24 00:47:34.072674 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:47:34.072687 kernel: Key type .fscrypt registered Jan 24 00:47:34.072696 kernel: Key type fscrypt-provisioning registered Jan 24 00:47:34.072707 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:47:34.072716 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:47:34.072724 kernel: ima: No architecture policies found Jan 24 00:47:34.072735 kernel: clk: Disabling unused clocks Jan 24 00:47:34.072747 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:47:34.072758 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:47:34.072769 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:47:34.072778 kernel: Run /init as init process Jan 24 00:47:34.072789 kernel: with arguments: Jan 24 00:47:34.072797 kernel: /init Jan 24 00:47:34.072807 kernel: with environment: Jan 24 00:47:34.072818 kernel: HOME=/ Jan 24 00:47:34.072826 kernel: TERM=linux Jan 24 00:47:34.072839 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:47:34.072852 systemd[1]: Detected virtualization microsoft. Jan 24 00:47:34.075885 systemd[1]: Detected architecture x86-64. Jan 24 00:47:34.075898 systemd[1]: Running in initrd. Jan 24 00:47:34.075910 systemd[1]: No hostname configured, using default hostname. Jan 24 00:47:34.075919 systemd[1]: Hostname set to . Jan 24 00:47:34.075930 systemd[1]: Initializing machine ID from random generator. 
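
The rtc_cmos line above renders the same instant twice: 2026-01-24T00:47:33 UTC and the epoch value 1769215653. The round trip is a one-liner:

    from datetime import datetime, timezone

    print(datetime.fromtimestamp(1769215653, tz=timezone.utc).isoformat())
    # 2026-01-24T00:47:33+00:00, matching the rtc_cmos line
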
Jan 24 00:47:34.075944 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:47:34.075956 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:47:34.075965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:47:34.075977 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:47:34.075987 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:47:34.075996 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:47:34.076008 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:47:34.076022 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:47:34.076033 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:47:34.076042 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:47:34.076054 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:47:34.076063 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:47:34.076074 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:47:34.076083 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:47:34.076094 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:47:34.076106 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:47:34.076117 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:47:34.076127 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:47:34.076136 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:47:34.076147 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:47:34.076156 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:47:34.076168 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:47:34.076177 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:47:34.076188 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:47:34.076203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:47:34.076212 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:47:34.076223 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:47:34.076233 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:47:34.076242 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:47:34.076277 systemd-journald[177]: Collecting audit messages is disabled. Jan 24 00:47:34.076304 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:34.076317 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:47:34.076326 systemd-journald[177]: Journal started Jan 24 00:47:34.076349 systemd-journald[177]: Runtime Journal (/run/log/journal/6f90b73944f846b3ab51321c0a93bc66) is 8.0M, max 158.8M, 150.8M free. 
Jan 24 00:47:34.085398 systemd-modules-load[178]: Inserted module 'overlay' Jan 24 00:47:34.093057 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:47:34.096158 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:47:34.102799 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:47:34.121040 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:47:34.124985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:47:34.141967 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:47:34.142764 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:47:34.152053 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 24 00:47:34.154600 kernel: Bridge firewalling registered Jan 24 00:47:34.154916 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:34.160740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:47:34.171076 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:47:34.186028 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:47:34.206037 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:47:34.210171 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:47:34.220311 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:47:34.227015 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:47:34.227253 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:47:34.237126 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:47:34.246405 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:47:34.252347 dracut-cmdline[213]: dracut-dracut-053 Jan 24 00:47:34.254920 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:47:34.312201 systemd-resolved[218]: Positive Trust Anchors: Jan 24 00:47:34.312223 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:47:34.312268 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:47:34.337784 systemd-resolved[218]: Defaulting to hostname 'linux'. Jan 24 00:47:34.339036 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:47:34.347557 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:47:34.355875 kernel: SCSI subsystem initialized Jan 24 00:47:34.366875 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:47:34.376879 kernel: iscsi: registered transport (tcp) Jan 24 00:47:34.398225 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:47:34.398286 kernel: QLogic iSCSI HBA Driver Jan 24 00:47:34.434709 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:47:34.446982 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:47:34.478285 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:47:34.478369 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:47:34.481647 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:47:34.521887 kernel: raid6: avx512x4 gen() 18480 MB/s Jan 24 00:47:34.540876 kernel: raid6: avx512x2 gen() 18404 MB/s Jan 24 00:47:34.559872 kernel: raid6: avx512x1 gen() 18367 MB/s Jan 24 00:47:34.578870 kernel: raid6: avx2x4 gen() 18366 MB/s Jan 24 00:47:34.597877 kernel: raid6: avx2x2 gen() 18331 MB/s Jan 24 00:47:34.617669 kernel: raid6: avx2x1 gen() 13943 MB/s Jan 24 00:47:34.617706 kernel: raid6: using algorithm avx512x4 gen() 18480 MB/s Jan 24 00:47:34.638708 kernel: raid6: .... xor() 8260 MB/s, rmw enabled Jan 24 00:47:34.638735 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:47:34.661884 kernel: xor: automatically using best checksumming function avx Jan 24 00:47:34.808887 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:47:34.818696 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:47:34.827027 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:47:34.840819 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jan 24 00:47:34.845424 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:47:34.861006 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:47:34.872893 dracut-pre-trigger[400]: rd.md=0: removing MD RAID activation Jan 24 00:47:34.900700 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:47:34.910004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:47:34.951369 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:47:34.963058 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
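
The raid6 lines above are a boot-time benchmark: the kernel times each gen() implementation and keeps the fastest, here avx512x4 at 18480 MB/s. The selection step, reduced to the figures from this log:

    results = {  # MB/s figures copied from the raid6 benchmark above
        "avx512x4": 18480, "avx512x2": 18404, "avx512x1": 18367,
        "avx2x4": 18366, "avx2x2": 18331, "avx2x1": 13943,
    }
    best = max(results, key=results.get)
    print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
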
Jan 24 00:47:34.977673 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:47:34.987448 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:47:34.994068 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:47:34.997114 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:47:35.016068 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:47:35.033883 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:47:35.056890 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:47:35.057312 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:47:35.068805 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:47:35.079138 kernel: AES CTR mode by8 optimization enabled Jan 24 00:47:35.069031 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:47:35.082693 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:47:35.088924 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:35.089189 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:35.103623 kernel: hv_vmbus: Vmbus version:5.2 Jan 24 00:47:35.107825 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:35.128876 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 24 00:47:35.124337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:35.133383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:35.142016 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 24 00:47:35.142040 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 24 00:47:35.136100 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:35.156941 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 24 00:47:35.157266 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:35.173884 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:47:35.182939 kernel: hv_vmbus: registering driver hv_storvsc Jan 24 00:47:35.186877 kernel: hv_vmbus: registering driver hid_hyperv Jan 24 00:47:35.191870 kernel: scsi host1: storvsc_host_t Jan 24 00:47:35.192062 kernel: scsi host0: storvsc_host_t Jan 24 00:47:35.192185 kernel: PTP clock support registered Jan 24 00:47:35.198875 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 24 00:47:35.198910 kernel: hv_utils: Registering HyperV Utility Driver Jan 24 00:47:35.202029 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 24 00:47:35.202069 kernel: hv_vmbus: registering driver hv_utils Jan 24 00:47:35.213026 kernel: hv_utils: Shutdown IC version 3.2 Jan 24 00:47:35.213062 kernel: hv_utils: Heartbeat IC version 3.0 Jan 24 00:47:35.213085 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 24 00:47:35.213233 kernel: hv_utils: TimeSync IC version 4.0 Jan 24 00:47:35.334357 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 24 00:47:35.332184 systemd-resolved[218]: Clock change detected. Flushing caches. 
Jan 24 00:47:35.349884 kernel: hv_vmbus: registering driver hv_netvsc Jan 24 00:47:35.342981 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:35.353996 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:47:35.372688 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 24 00:47:35.373033 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:47:35.378812 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 24 00:47:35.400173 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:47:35.415838 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 24 00:47:35.416073 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 24 00:47:35.416206 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:47:35.416350 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 24 00:47:35.416462 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 24 00:47:35.419780 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:47:35.422787 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:47:35.590790 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: VF slot 1 added Jan 24 00:47:35.599751 kernel: hv_vmbus: registering driver hv_pci Jan 24 00:47:35.599805 kernel: hv_pci ba335987-aebe-493f-9405-6d4ab83a95b7: PCI VMBus probing: Using version 0x10004 Jan 24 00:47:35.608541 kernel: hv_pci ba335987-aebe-493f-9405-6d4ab83a95b7: PCI host bridge to bus aebe:00 Jan 24 00:47:35.608823 kernel: pci_bus aebe:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 24 00:47:35.611593 kernel: pci_bus aebe:00: No busn resource found for root bus, will use [bus 00-ff] Jan 24 00:47:35.615833 kernel: pci aebe:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 24 00:47:35.623758 kernel: pci aebe:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:47:35.628134 kernel: pci aebe:00:02.0: enabling Extended Tags Jan 24 00:47:35.639815 kernel: pci aebe:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at aebe:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 24 00:47:35.646039 kernel: pci_bus aebe:00: busn_res: [bus 00-ff] end is updated to 00 Jan 24 00:47:35.646340 kernel: pci aebe:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:47:35.821151 kernel: mlx5_core aebe:00:02.0: enabling device (0000 -> 0002) Jan 24 00:47:35.825791 kernel: mlx5_core aebe:00:02.0: firmware version: 14.30.5026 Jan 24 00:47:36.001933 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 24 00:47:36.021791 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (467) Jan 24 00:47:36.041049 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (447) Jan 24 00:47:36.050543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:47:36.063939 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: VF registering: eth1 Jan 24 00:47:36.071092 kernel: mlx5_core aebe:00:02.0 eth1: joined to eth0 Jan 24 00:47:36.071348 kernel: mlx5_core aebe:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 24 00:47:36.071856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Jan 24 00:47:36.084025 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 24 00:47:36.092231 kernel: mlx5_core aebe:00:02.0 enP44734s1: renamed from eth1 Jan 24 00:47:36.092242 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 24 00:47:36.106970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 24 00:47:36.129782 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:47:36.139781 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:47:36.148810 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:47:37.159783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:47:37.160458 disk-uuid[603]: The operation has completed successfully. Jan 24 00:47:37.263265 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:47:37.263393 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:47:37.292915 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:47:37.299855 sh[716]: Success Jan 24 00:47:37.333801 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:47:37.668051 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:47:37.670865 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:47:37.679109 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:47:37.695784 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:47:37.695822 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:47:37.701585 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:47:37.704660 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:47:37.707170 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:47:38.079334 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:47:38.080321 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:47:38.090019 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:47:38.095938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:47:38.117900 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:38.117959 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:47:38.120418 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:47:38.165792 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:47:38.177285 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:47:38.182867 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:38.193778 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:47:38.199266 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:47:38.209980 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:47:38.217736 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 24 00:47:38.244920 systemd-networkd[901]: lo: Link UP Jan 24 00:47:38.244929 systemd-networkd[901]: lo: Gained carrier Jan 24 00:47:38.247147 systemd-networkd[901]: Enumeration completed Jan 24 00:47:38.247597 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:47:38.249010 systemd[1]: Reached target network.target - Network. Jan 24 00:47:38.249740 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:47:38.249744 systemd-networkd[901]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:47:38.304790 kernel: mlx5_core aebe:00:02.0 enP44734s1: Link up Jan 24 00:47:38.338393 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: Data path switched to VF: enP44734s1 Jan 24 00:47:38.337814 systemd-networkd[901]: enP44734s1: Link UP Jan 24 00:47:38.338001 systemd-networkd[901]: eth0: Link UP Jan 24 00:47:38.338247 systemd-networkd[901]: eth0: Gained carrier Jan 24 00:47:38.338261 systemd-networkd[901]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:47:38.342966 systemd-networkd[901]: enP44734s1: Gained carrier Jan 24 00:47:38.370813 systemd-networkd[901]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 24 00:47:39.249573 ignition[899]: Ignition 2.19.0 Jan 24 00:47:39.249586 ignition[899]: Stage: fetch-offline Jan 24 00:47:39.249631 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:39.249642 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:39.249777 ignition[899]: parsed url from cmdline: "" Jan 24 00:47:39.249783 ignition[899]: no config URL provided Jan 24 00:47:39.249791 ignition[899]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:47:39.249803 ignition[899]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:47:39.249810 ignition[899]: failed to fetch config: resource requires networking Jan 24 00:47:39.251567 ignition[899]: Ignition finished successfully Jan 24 00:47:39.269984 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:47:39.279025 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 24 00:47:39.296686 ignition[910]: Ignition 2.19.0 Jan 24 00:47:39.296697 ignition[910]: Stage: fetch Jan 24 00:47:39.296950 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:39.296964 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:39.297079 ignition[910]: parsed url from cmdline: "" Jan 24 00:47:39.297084 ignition[910]: no config URL provided Jan 24 00:47:39.297089 ignition[910]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:47:39.297099 ignition[910]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:47:39.297119 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 24 00:47:39.434827 ignition[910]: GET result: OK Jan 24 00:47:39.434937 ignition[910]: config has been read from IMDS userdata Jan 24 00:47:39.434978 ignition[910]: parsing config with SHA512: ffdb30ca0abd66e83d1b7275b877ebbd3020f22a37af6c4c82eb4eebf484c4312c1a487516214058261f4e0f1f70e69f367def80d71c889a6774be5975fde799 Jan 24 00:47:39.440209 unknown[910]: fetched base config from "system" Jan 24 00:47:39.440228 unknown[910]: fetched base config from "system" Jan 24 00:47:39.441577 ignition[910]: fetch: fetch complete Jan 24 00:47:39.440238 unknown[910]: fetched user config from "azure" Jan 24 00:47:39.441584 ignition[910]: fetch: fetch passed Jan 24 00:47:39.443585 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 24 00:47:39.441655 ignition[910]: Ignition finished successfully Jan 24 00:47:39.457908 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:47:39.473555 ignition[917]: Ignition 2.19.0 Jan 24 00:47:39.473567 ignition[917]: Stage: kargs Jan 24 00:47:39.473803 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:39.473817 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:39.478931 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:47:39.474734 ignition[917]: kargs: kargs passed Jan 24 00:47:39.474828 ignition[917]: Ignition finished successfully Jan 24 00:47:39.491621 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:47:39.503303 ignition[923]: Ignition 2.19.0 Jan 24 00:47:39.503314 ignition[923]: Stage: disks Jan 24 00:47:39.503534 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:39.506539 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:47:39.503547 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:39.509732 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:47:39.504469 ignition[923]: disks: disks passed Jan 24 00:47:39.514682 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:47:39.504512 ignition[923]: Ignition finished successfully Jan 24 00:47:39.515203 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:47:39.515611 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:47:39.516019 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:47:39.526985 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:47:39.613661 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 24 00:47:39.620174 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 24 00:47:39.630946 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:47:39.723780 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:47:39.724269 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:47:39.726933 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:47:39.771911 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:47:39.790786 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Jan 24 00:47:39.790851 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:39.796675 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:47:39.796733 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:47:39.797628 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:47:39.810793 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:47:39.808997 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:47:39.811934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:47:39.811970 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:47:39.816338 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:47:39.818220 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:47:39.821935 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:47:39.874058 systemd-networkd[901]: eth0: Gained IPv6LL Jan 24 00:47:40.546383 coreos-metadata[958]: Jan 24 00:47:40.546 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 24 00:47:40.552750 coreos-metadata[958]: Jan 24 00:47:40.552 INFO Fetch successful Jan 24 00:47:40.556106 coreos-metadata[958]: Jan 24 00:47:40.552 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 24 00:47:40.563997 coreos-metadata[958]: Jan 24 00:47:40.563 INFO Fetch successful Jan 24 00:47:40.582090 coreos-metadata[958]: Jan 24 00:47:40.581 INFO wrote hostname ci-4081.3.6-n-d923855e69 to /sysroot/etc/hostname Jan 24 00:47:40.584072 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:47:40.657932 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:47:40.710687 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:47:40.719673 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:47:40.727030 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:47:41.749457 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:47:41.759881 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:47:41.763423 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:47:41.777927 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:47:41.783930 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:41.808284 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 24 00:47:41.817210 ignition[1060]: INFO : Ignition 2.19.0 Jan 24 00:47:41.817210 ignition[1060]: INFO : Stage: mount Jan 24 00:47:41.823689 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:41.823689 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:41.823689 ignition[1060]: INFO : mount: mount passed Jan 24 00:47:41.823689 ignition[1060]: INFO : Ignition finished successfully Jan 24 00:47:41.820091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:47:41.841953 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:47:41.848895 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:47:41.870785 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Jan 24 00:47:41.870840 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:47:41.874783 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:47:41.878441 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:47:41.885787 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:47:41.887461 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:47:41.911655 ignition[1089]: INFO : Ignition 2.19.0 Jan 24 00:47:41.911655 ignition[1089]: INFO : Stage: files Jan 24 00:47:41.916445 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:41.916445 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:41.916445 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:47:41.916445 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:47:41.916445 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:47:42.010405 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:47:42.014492 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 24 00:47:42.011050 unknown[1089]: wrote ssh authorized keys file for user: core Jan 24 00:47:42.031921 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:47:42.031921 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 24 00:47:42.059809 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 24 00:47:42.121752 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/install.sh" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.127138 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 24 00:47:42.556661 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 24 00:47:42.750574 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 24 00:47:42.750574 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 24 00:47:42.784679 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(e): [finished] processing unit 
"prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:47:42.791022 ignition[1089]: INFO : files: files passed Jan 24 00:47:42.791022 ignition[1089]: INFO : Ignition finished successfully Jan 24 00:47:42.787333 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:47:42.845383 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:47:42.851810 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:47:42.855052 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:47:42.855174 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:47:42.870983 initrd-setup-root-after-ignition[1117]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.870983 initrd-setup-root-after-ignition[1117]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.879307 initrd-setup-root-after-ignition[1121]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:47:42.875534 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:47:42.882512 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:47:42.898921 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:47:42.928826 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:47:42.928940 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 24 00:47:42.935105 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:47:42.943409 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:47:42.945881 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:47:42.955940 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:47:42.969124 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:47:42.978245 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:47:42.990368 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:47:42.990546 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:47:42.991426 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:47:42.991826 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:47:42.991959 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:47:42.992636 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:47:42.993124 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:47:42.993496 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 24 00:47:42.993894 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:47:42.994307 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:47:42.994771 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:47:42.995255 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:47:42.995689 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:47:42.996108 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:47:42.996504 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:47:42.996872 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:47:42.997001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:47:42.997720 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:47:42.998229 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:47:42.998700 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:47:43.036573 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:47:43.088394 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:47:43.090978 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:47:43.096300 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:47:43.099054 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:47:43.105443 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:47:43.107691 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:47:43.112504 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:47:43.115084 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:47:43.125950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 24 00:47:43.132035 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:47:43.134132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:47:43.134405 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:47:43.137383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:47:43.137532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:47:43.149250 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:47:43.149367 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:47:43.165528 ignition[1141]: INFO : Ignition 2.19.0 Jan 24 00:47:43.165528 ignition[1141]: INFO : Stage: umount Jan 24 00:47:43.165528 ignition[1141]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:47:43.165528 ignition[1141]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:47:43.174287 ignition[1141]: INFO : umount: umount passed Jan 24 00:47:43.174287 ignition[1141]: INFO : Ignition finished successfully Jan 24 00:47:43.166977 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:47:43.167091 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:47:43.172240 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jan 24 00:47:43.172492 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:47:43.191818 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:47:43.191896 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:47:43.198856 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:47:43.198923 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:47:43.206078 systemd[1]: Stopped target network.target - Network. Jan 24 00:47:43.210141 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:47:43.212793 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:47:43.218312 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:47:43.222624 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:47:43.225818 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:47:43.234454 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:47:43.238713 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:47:43.243533 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:47:43.243595 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:47:43.250027 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:47:43.250082 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:47:43.256874 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:47:43.259242 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:47:43.263875 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:47:43.263936 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:47:43.269110 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 24 00:47:43.274292 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:47:43.283101 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:47:43.286260 systemd-networkd[901]: eth0: DHCPv6 lease lost Jan 24 00:47:43.289084 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:47:43.289205 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:47:43.296101 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:47:43.296151 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:47:43.308864 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:47:43.313389 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:47:43.313459 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:47:43.317263 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:47:43.326169 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:47:43.329319 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:47:43.345091 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:47:43.345265 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:47:43.355216 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:47:43.355277 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 24 00:47:43.360412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:47:43.360456 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:47:43.365749 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:47:43.365814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:47:43.378237 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:47:43.378315 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:47:43.383330 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:47:43.383381 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:47:43.398779 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: Data path switched from VF: enP44734s1 Jan 24 00:47:43.402926 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:47:43.405566 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:47:43.405636 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:47:43.413706 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:47:43.413787 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:47:43.419283 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:47:43.419335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:47:43.424940 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 24 00:47:43.424996 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:47:43.430491 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:47:43.430548 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:47:43.436297 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:47:43.436352 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:47:43.442352 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:43.442406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:43.448254 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:47:43.448348 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:47:43.454293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:47:43.454380 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:47:43.675670 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:47:43.675858 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:47:43.681008 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:47:43.685702 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:47:43.685782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:47:43.701938 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:47:43.873565 systemd[1]: Switching root. Jan 24 00:47:43.909903 systemd-journald[177]: Journal stopped Jan 24 00:47:49.269434 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
Jan 24 00:47:49.269477 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:47:49.269495 kernel: SELinux: policy capability open_perms=1 Jan 24 00:47:49.269508 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:47:49.269522 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:47:49.269536 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:47:49.269552 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:47:49.269570 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:47:49.269585 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:47:49.269600 kernel: audit: type=1403 audit(1769215666.174:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:47:49.269616 systemd[1]: Successfully loaded SELinux policy in 140.484ms. Jan 24 00:47:49.269634 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.535ms. Jan 24 00:47:49.269652 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:47:49.269669 systemd[1]: Detected virtualization microsoft. Jan 24 00:47:49.269690 systemd[1]: Detected architecture x86-64. Jan 24 00:47:49.269707 systemd[1]: Detected first boot. Jan 24 00:47:49.269724 systemd[1]: Hostname set to <ci-4081.3.6-n-d923855e69>. Jan 24 00:47:49.269741 systemd[1]: Initializing machine ID from random generator. Jan 24 00:47:49.269757 zram_generator::config[1200]: No configuration found. Jan 24 00:47:49.269871 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:47:49.269887 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:47:49.269897 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:47:49.269910 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:47:49.269923 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:47:49.269936 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 24 00:47:49.269949 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:47:49.269967 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:47:49.269981 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:47:49.269995 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:47:49.270009 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:47:49.270025 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:47:49.270040 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:47:49.270054 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:47:49.270072 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:47:49.270088 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:47:49.270104 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:47:49.270121 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:47:49.270138 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:47:49.270157 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:47:49.270176 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:47:49.270199 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:47:49.270217 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:47:49.270240 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:47:49.270257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:47:49.270272 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:47:49.270286 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:47:49.270300 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:47:49.270313 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:47:49.270326 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:47:49.270341 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:47:49.270352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:47:49.270365 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:47:49.270376 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:47:49.270389 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:47:49.270402 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:49.270412 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:47:49.270425 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:47:49.270435 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 24 00:47:49.270446 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:47:49.270458 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:47:49.270470 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:47:49.270483 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:47:49.270495 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:47:49.270505 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:47:49.270516 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:47:49.270529 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:47:49.270539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:47:49.270552 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:47:49.270562 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. 
Jan 24 00:47:49.270576 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 24 00:47:49.270588 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:47:49.270601 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:47:49.270612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:47:49.270624 kernel: fuse: init (API version 7.39) Jan 24 00:47:49.270635 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:47:49.270647 kernel: loop: module loaded Jan 24 00:47:49.270657 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:47:49.270669 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:47:49.270681 kernel: ACPI: bus type drm_connector registered Jan 24 00:47:49.270715 systemd-journald[1311]: Collecting audit messages is disabled. Jan 24 00:47:49.270739 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 24 00:47:49.270752 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:47:49.270778 systemd-journald[1311]: Journal started Jan 24 00:47:49.270805 systemd-journald[1311]: Runtime Journal (/run/log/journal/44a43b90086a4c74b26d7b421b2c1242) is 8.0M, max 158.8M, 150.8M free. Jan 24 00:47:49.282689 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:47:49.283721 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:47:49.286834 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:47:49.289812 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:47:49.292823 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:47:49.295794 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:47:49.299511 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:47:49.303290 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:47:49.303534 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:47:49.307412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:47:49.307636 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:47:49.310905 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:47:49.311126 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:47:49.314950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:47:49.315186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:47:49.318829 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:47:49.319025 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:47:49.322456 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:47:49.322997 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:47:49.326481 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:47:49.330414 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 24 00:47:49.334255 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:47:49.355247 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:47:49.363889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:47:49.375859 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:47:49.378634 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:47:49.384940 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:47:49.395972 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 24 00:47:49.399925 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:47:49.400948 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:47:49.403902 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:47:49.406075 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:47:49.410937 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:47:49.422400 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:47:49.426218 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:47:49.435936 systemd-journald[1311]: Time spent on flushing to /var/log/journal/44a43b90086a4c74b26d7b421b2c1242 is 24.388ms for 946 entries. Jan 24 00:47:49.435936 systemd-journald[1311]: System Journal (/var/log/journal/44a43b90086a4c74b26d7b421b2c1242) is 8.0M, max 2.6G, 2.6G free. Jan 24 00:47:49.503977 systemd-journald[1311]: Received client request to flush runtime journal. Jan 24 00:47:49.429698 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:47:49.434066 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:47:49.443350 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:47:49.452977 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:47:49.477929 udevadm[1368]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 24 00:47:49.487320 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 24 00:47:49.487349 systemd-tmpfiles[1360]: ACLs are not supported, ignoring. Jan 24 00:47:49.493481 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:47:49.501000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:47:49.506645 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:47:49.580660 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:47:49.597290 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:47:49.610006 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:47:49.639150 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. 
Jan 24 00:47:49.639546 systemd-tmpfiles[1381]: ACLs are not supported, ignoring. Jan 24 00:47:49.645043 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:47:50.319068 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 24 00:47:50.330970 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:47:50.352298 systemd-udevd[1387]: Using default interface naming scheme 'v255'. Jan 24 00:47:50.558840 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:47:50.572951 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:47:50.648271 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 24 00:47:50.679541 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:47:50.762806 kernel: hv_vmbus: registering driver hv_balloon Jan 24 00:47:50.770002 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 24 00:47:50.770086 kernel: hv_vmbus: registering driver hyperv_fb Jan 24 00:47:50.776784 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 24 00:47:50.782809 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 24 00:47:50.788819 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:47:50.788887 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:47:50.796651 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:47:50.801343 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:47:50.842467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:50.987510 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:47:50.987874 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:47:50.996969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:47:51.073900 systemd-networkd[1395]: lo: Link UP Jan 24 00:47:51.073913 systemd-networkd[1395]: lo: Gained carrier Jan 24 00:47:51.079747 systemd-networkd[1395]: Enumeration completed Jan 24 00:47:51.081273 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:47:51.081286 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:47:51.081711 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:47:51.090924 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:47:51.101779 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1401) Jan 24 00:47:51.135099 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:47:51.179788 kernel: mlx5_core aebe:00:02.0 enP44734s1: Link up Jan 24 00:47:51.201316 kernel: hv_netvsc 7c1e521d-addc-7c1e-521d-addc7c1e521d eth0: Data path switched to VF: enP44734s1 Jan 24 00:47:51.210562 systemd-networkd[1395]: enP44734s1: Link UP Jan 24 00:47:51.210719 systemd-networkd[1395]: eth0: Link UP Jan 24 00:47:51.210725 systemd-networkd[1395]: eth0: Gained carrier Jan 24 00:47:51.210749 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 24 00:47:51.217665 systemd-networkd[1395]: enP44734s1: Gained carrier
Jan 24 00:47:51.227785 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 24 00:47:51.244850 systemd-networkd[1395]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:47:51.345678 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:47:51.352932 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:47:51.452174 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:47:51.482961 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:47:51.487233 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:47:51.494950 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:47:51.502038 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:47:51.528278 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:47:51.532119 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:47:51.536338 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:47:51.539708 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 24 00:47:51.539815 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:47:51.542510 systemd[1]: Reached target machines.target - Containers.
Jan 24 00:47:51.546081 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 24 00:47:51.552933 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 24 00:47:51.557250 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 24 00:47:51.560186 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:47:51.563005 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 24 00:47:51.568931 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 24 00:47:51.573283 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 24 00:47:51.582411 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 24 00:47:51.634098 kernel: loop0: detected capacity change from 0 to 31056
Jan 24 00:47:51.630996 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:47:51.661317 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 24 00:47:51.662455 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 24 00:47:52.065960 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 24 00:47:52.109791 kernel: loop1: detected capacity change from 0 to 224512
Jan 24 00:47:52.172791 kernel: loop2: detected capacity change from 0 to 142488
Jan 24 00:47:52.700793 kernel: loop3: detected capacity change from 0 to 140768
Jan 24 00:47:52.993899 systemd-networkd[1395]: eth0: Gained IPv6LL
Jan 24 00:47:52.998926 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:47:53.180938 kernel: loop4: detected capacity change from 0 to 31056
Jan 24 00:47:53.197802 kernel: loop5: detected capacity change from 0 to 224512
Jan 24 00:47:53.215830 kernel: loop6: detected capacity change from 0 to 142488
Jan 24 00:47:53.235792 kernel: loop7: detected capacity change from 0 to 140768
Jan 24 00:47:53.249283 (sd-merge)[1508]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 24 00:47:53.249911 (sd-merge)[1508]: Merged extensions into '/usr'.
Jan 24 00:47:53.253564 systemd[1]: Reloading requested from client PID 1492 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:47:53.253579 systemd[1]: Reloading...
Jan 24 00:47:53.315808 zram_generator::config[1533]: No configuration found.
Jan 24 00:47:53.474757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:47:53.560212 systemd[1]: Reloading finished in 306 ms.
Jan 24 00:47:53.575150 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:47:53.584915 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:47:53.590929 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:47:53.597652 systemd[1]: Reloading requested from client PID 1599 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:47:53.597670 systemd[1]: Reloading...
Jan 24 00:47:53.620333 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:47:53.621361 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:47:53.622386 systemd-tmpfiles[1600]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:47:53.622695 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 24 00:47:53.622797 systemd-tmpfiles[1600]: ACLs are not supported, ignoring.
Jan 24 00:47:53.644319 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:47:53.644331 systemd-tmpfiles[1600]: Skipping /boot
Jan 24 00:47:53.682106 systemd-tmpfiles[1600]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:47:53.682122 systemd-tmpfiles[1600]: Skipping /boot
Jan 24 00:47:53.732797 zram_generator::config[1632]: No configuration found.
Jan 24 00:47:53.874002 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:47:53.949160 systemd[1]: Reloading finished in 350 ms.
Jan 24 00:47:53.969100 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:47:53.987319 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:47:53.996934 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:47:54.002959 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:47:54.009928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:47:54.023123 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:47:54.039259 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:54.039569 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:47:54.042340 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:47:54.049971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:47:54.068090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:47:54.071283 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:47:54.071463 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:54.078597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:47:54.078858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:47:54.089430 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:47:54.089642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:47:54.093800 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:47:54.094023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:47:54.116592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:54.118580 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:47:54.124074 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:47:54.139054 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:47:54.153411 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:47:54.170059 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:47:54.172959 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:47:54.173231 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:47:54.182629 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:47:54.188887 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:47:54.193532 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:47:54.198103 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:47:54.198291 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:47:54.201940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:47:54.202152 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:47:54.205453 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:47:54.205699 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:47:54.209324 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:47:54.209540 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:47:54.219555 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:47:54.229693 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:47:54.229802 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:47:54.230066 augenrules[1740]: No rules
Jan 24 00:47:54.231198 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:47:54.238596 systemd-resolved[1704]: Positive Trust Anchors:
Jan 24 00:47:54.238609 systemd-resolved[1704]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:47:54.238668 systemd-resolved[1704]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:47:54.258261 systemd-resolved[1704]: Using system hostname 'ci-4081.3.6-n-d923855e69'.
Jan 24 00:47:54.260269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:47:54.263673 systemd[1]: Reached target network.target - Network.
Jan 24 00:47:54.266091 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:47:54.268870 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:47:54.646100 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:47:54.650304 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:47:57.942622 ldconfig[1489]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:47:57.953748 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:47:57.963981 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:47:57.975570 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:47:57.978838 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:47:57.981941 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:47:57.985380 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:47:57.988833 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:47:57.991679 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:47:57.994976 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:47:57.997936 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:47:57.997969 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:47:58.000317 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:47:58.003572 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:47:58.007901 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:47:58.011559 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:47:58.015800 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:47:58.018350 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:47:58.020640 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:47:58.023156 systemd[1]: System is tainted: cgroupsv1
Jan 24 00:47:58.023222 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:47:58.023260 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:47:58.027414 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 24 00:47:58.040883 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:47:58.046944 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 24 00:47:58.058986 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:47:58.063225 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:47:58.073517 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:47:58.076338 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:47:58.076407 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 24 00:47:58.082923 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 24 00:47:58.086014 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 24 00:47:58.096467 jq[1770]: false
Jan 24 00:47:58.098078 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:47:58.104944 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:47:58.122932 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 00:47:58.130493 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:47:58.132652 KVP[1774]: KVP starting; pid is:1774
Jan 24 00:47:58.139945 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:47:58.142236 (chronyd)[1765]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 24 00:47:58.152545 extend-filesystems[1771]: Found loop4
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found loop5
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found loop6
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found loop7
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda1
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda2
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda3
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found usr
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda4
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda6
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda7
Jan 24 00:47:58.157527 extend-filesystems[1771]: Found sda9
Jan 24 00:47:58.157527 extend-filesystems[1771]: Checking size of /dev/sda9
Jan 24 00:47:58.208239 kernel: hv_utils: KVP IC version 4.0
Jan 24 00:47:58.174066 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:47:58.195799 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:47:58.160543 KVP[1774]: KVP LIC Version: 3.1
Jan 24 00:47:58.176252 chronyd[1790]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 24 00:47:58.196320 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:47:58.202832 chronyd[1790]: Timezone right/UTC failed leap second check, ignoring
Jan 24 00:47:58.209033 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:47:58.203059 chronyd[1790]: Loaded seccomp filter (level 2)
Jan 24 00:47:58.217907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:47:58.234263 systemd[1]: Started chronyd.service - NTP client/server.
Jan 24 00:47:58.238993 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:47:58.239299 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:47:58.242491 jq[1801]: true
Jan 24 00:47:58.243110 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:47:58.243402 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:47:58.264166 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:47:58.264465 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:47:58.266040 extend-filesystems[1771]: Old size kept for /dev/sda9
Jan 24 00:47:58.272821 extend-filesystems[1771]: Found sr0
Jan 24 00:47:58.270530 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:47:58.287959 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:47:58.296177 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 00:47:58.333973 dbus-daemon[1769]: [system] SELinux support is enabled
Jan 24 00:47:58.341346 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:47:58.349232 (ntainerd)[1821]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:47:58.360414 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:47:58.360468 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:47:58.365084 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:47:58.365116 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:47:58.384336 systemd-logind[1792]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:47:58.389511 systemd-logind[1792]: New seat seat0.
Jan 24 00:47:58.392341 jq[1820]: true
Jan 24 00:47:58.394916 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:47:58.397638 tar[1807]: linux-amd64/LICENSE
Jan 24 00:47:58.397971 tar[1807]: linux-amd64/helm
Jan 24 00:47:58.411869 update_engine[1799]: I20260124 00:47:58.411710 1799 main.cc:92] Flatcar Update Engine starting
Jan 24 00:47:58.423596 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:47:58.430238 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:47:58.432155 update_engine[1799]: I20260124 00:47:58.431979 1799 update_check_scheduler.cc:74] Next update check in 10m49s
Jan 24 00:47:58.439970 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:47:58.522380 coreos-metadata[1768]: Jan 24 00:47:58.522 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 24 00:47:58.529808 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1852)
Jan 24 00:47:58.530880 coreos-metadata[1768]: Jan 24 00:47:58.530 INFO Fetch successful
Jan 24 00:47:58.531935 coreos-metadata[1768]: Jan 24 00:47:58.531 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 24 00:47:58.538899 coreos-metadata[1768]: Jan 24 00:47:58.538 INFO Fetch successful
Jan 24 00:47:58.542241 coreos-metadata[1768]: Jan 24 00:47:58.539 INFO Fetching http://168.63.129.16/machine/d163157a-b9db-4840-b7a1-d0c227069a21/a7fe8071%2D73fc%2D4c37%2D84b8%2Db99b7c8d7e99.%5Fci%2D4081.3.6%2Dn%2Dd923855e69?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 24 00:47:58.542445 coreos-metadata[1768]: Jan 24 00:47:58.542 INFO Fetch successful
Jan 24 00:47:58.545945 coreos-metadata[1768]: Jan 24 00:47:58.543 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 24 00:47:58.555791 coreos-metadata[1768]: Jan 24 00:47:58.555 INFO Fetch successful
Jan 24 00:47:58.626791 bash[1871]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:47:58.631279 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:47:58.650670 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:47:58.672210 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 00:47:58.684432 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 00:47:58.872405 locksmithd[1848]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:47:59.233274 sshd_keygen[1809]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:47:59.295318 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:47:59.308429 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:47:59.313879 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 24 00:47:59.327047 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:47:59.327387 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:47:59.357555 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:47:59.383356 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 24 00:47:59.420564 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:47:59.437149 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:47:59.453125 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:47:59.460025 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:47:59.625590 tar[1807]: linux-amd64/README.md
Jan 24 00:47:59.660235 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 00:47:59.683952 containerd[1821]: time="2026-01-24T00:47:59.683106800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:47:59.719715 containerd[1821]: time="2026-01-24T00:47:59.719664700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726217 containerd[1821]: time="2026-01-24T00:47:59.726016400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726217 containerd[1821]: time="2026-01-24T00:47:59.726065500Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:47:59.726217 containerd[1821]: time="2026-01-24T00:47:59.726094900Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:47:59.726385 containerd[1821]: time="2026-01-24T00:47:59.726276900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:47:59.726385 containerd[1821]: time="2026-01-24T00:47:59.726305800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726463 containerd[1821]: time="2026-01-24T00:47:59.726388500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726463 containerd[1821]: time="2026-01-24T00:47:59.726412300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726706 containerd[1821]: time="2026-01-24T00:47:59.726674800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726789 containerd[1821]: time="2026-01-24T00:47:59.726700800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726789 containerd[1821]: time="2026-01-24T00:47:59.726724700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:47:59.726789 containerd[1821]: time="2026-01-24T00:47:59.726744100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.727755 containerd[1821]: time="2026-01-24T00:47:59.727353300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.727755 containerd[1821]: time="2026-01-24T00:47:59.727615400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:47:59.728833 containerd[1821]: time="2026-01-24T00:47:59.728257100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:47:59.728833 containerd[1821]: time="2026-01-24T00:47:59.728285900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:47:59.728833 containerd[1821]: time="2026-01-24T00:47:59.728391100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:47:59.728833 containerd[1821]: time="2026-01-24T00:47:59.728444400Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749324500Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749386500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749410100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749430800Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749450800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:47:59.749614 containerd[1821]: time="2026-01-24T00:47:59.749604000Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:47:59.750155 containerd[1821]: time="2026-01-24T00:47:59.750122800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750292600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750320000Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750351300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750370700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750389200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750406700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750446700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750472900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750502900Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750520600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750537900Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750583500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750604400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.751651 containerd[1821]: time="2026-01-24T00:47:59.750622200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750658500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750677800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750696300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750712400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750747300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750783700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750804400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750821600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750838200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750867400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750889200Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750916300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750944600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752181 containerd[1821]: time="2026-01-24T00:47:59.750959700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751119300Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751145400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751161000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751192700Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751207800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751234500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751263200Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 00:47:59.752629 containerd[1821]: time="2026-01-24T00:47:59.751277700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 00:47:59.752979 containerd[1821]: time="2026-01-24T00:47:59.751751800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 00:47:59.752979 containerd[1821]: time="2026-01-24T00:47:59.751864200Z" level=info msg="Connect containerd service"
Jan 24 00:47:59.752979 containerd[1821]: time="2026-01-24T00:47:59.751933300Z" level=info msg="using legacy CRI server"
Jan 24 00:47:59.752979 containerd[1821]: time="2026-01-24T00:47:59.751944700Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:47:59.752979 containerd[1821]: time="2026-01-24T00:47:59.752098300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 00:47:59.753298 containerd[1821]: time="2026-01-24T00:47:59.753140300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753407400Z" level=info msg="Start subscribing containerd event"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753513000Z" level=info msg="Start recovering state"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753583900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753587600Z" level=info msg="Start event monitor"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753624500Z" level=info msg="Start snapshots syncer"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753638900Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753649800Z" level=info msg="Start streaming server"
Jan 24 00:47:59.756504 containerd[1821]: time="2026-01-24T00:47:59.753663600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:47:59.753874 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:47:59.761850 containerd[1821]: time="2026-01-24T00:47:59.761826000Z" level=info msg="containerd successfully booted in 0.079849s"
Jan 24 00:48:00.035950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:48:00.041297 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:48:00.041582 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 00:48:00.045910 systemd[1]: Startup finished in 933ms (firmware) + 17.243s (loader) + 13.202s (kernel) + 14.010s (userspace) = 45.390s.
Jan 24 00:48:00.443500 login[1937]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 00:48:00.445456 login[1938]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 00:48:00.460047 systemd-logind[1792]: New session 1 of user core.
Jan 24 00:48:00.462524 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:48:00.470284 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:48:00.476327 systemd-logind[1792]: New session 2 of user core.
Jan 24 00:48:00.515907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:48:00.526829 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:48:00.549977 (systemd)[1970]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 00:48:00.703913 kubelet[1957]: E0124 00:48:00.703799 1957 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:48:00.707112 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:48:00.707460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:48:00.744807 systemd[1970]: Queued start job for default target default.target.
Jan 24 00:48:00.745403 systemd[1970]: Created slice app.slice - User Application Slice.
Jan 24 00:48:00.745436 systemd[1970]: Reached target paths.target - Paths.
Jan 24 00:48:00.745453 systemd[1970]: Reached target timers.target - Timers.
Jan 24 00:48:00.754859 systemd[1970]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:48:00.761463 systemd[1970]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:48:00.761666 systemd[1970]: Reached target sockets.target - Sockets.
Jan 24 00:48:00.762809 systemd[1970]: Reached target basic.target - Basic System.
Jan 24 00:48:00.762886 systemd[1970]: Reached target default.target - Main User Target.
Jan 24 00:48:00.762921 systemd[1970]: Startup finished in 206ms.
Jan 24 00:48:00.763058 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:48:00.768113 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:48:00.770668 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:48:01.492125 waagent[1932]: 2026-01-24T00:48:01.492022Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.492487Z INFO Daemon Daemon OS: flatcar 4081.3.6
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.493463Z INFO Daemon Daemon Python: 3.11.9
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.494629Z INFO Daemon Daemon Run daemon
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.495548Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.496355Z INFO Daemon Daemon Using waagent for provisioning
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.497334Z INFO Daemon Daemon Activate resource disk
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.497645Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.502194Z INFO Daemon Daemon Found device: None
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.503091Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.503967Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.505895Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 24 00:48:01.526795 waagent[1932]: 2026-01-24T00:48:01.506444Z INFO Daemon Daemon Running default provisioning handler
Jan 24 00:48:01.529980 waagent[1932]: 2026-01-24T00:48:01.529909Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 24 00:48:01.536255 waagent[1932]: 2026-01-24T00:48:01.536201Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 24 00:48:01.544813 waagent[1932]: 2026-01-24T00:48:01.536402Z INFO Daemon Daemon cloud-init is enabled: False
Jan 24 00:48:01.544813 waagent[1932]: 2026-01-24T00:48:01.537206Z INFO Daemon Daemon Copying ovf-env.xml
Jan 24 00:48:01.640785 waagent[1932]: 2026-01-24T00:48:01.637121Z INFO Daemon Daemon Successfully mounted dvd
Jan 24 00:48:01.652788 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.654057Z INFO Daemon Daemon Detect protocol endpoint
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.654337Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.655361Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.656200Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.657184Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 24 00:48:01.663890 waagent[1932]: 2026-01-24T00:48:01.657496Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 24 00:48:01.687347 waagent[1932]: 2026-01-24T00:48:01.687284Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 24 00:48:01.695039 waagent[1932]: 2026-01-24T00:48:01.687850Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 24 00:48:01.695039 waagent[1932]: 2026-01-24T00:48:01.688459Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 24 00:48:01.771199 waagent[1932]: 2026-01-24T00:48:01.771039Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 24 00:48:01.774415 waagent[1932]: 2026-01-24T00:48:01.774312Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 24 00:48:01.848191 waagent[1932]: 2026-01-24T00:48:01.848104Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 24 00:48:01.860578 waagent[1932]: 2026-01-24T00:48:01.860515Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.861251Z INFO Daemon
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.861813Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 49311b1f-9035-407f-8bd3-fcda994b8706 eTag: 2354450011202648558 source: Fabric]
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.862880Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.863941Z INFO Daemon
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.865024Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 24 00:48:01.878488 waagent[1932]: 2026-01-24T00:48:01.869305Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 24 00:48:02.014699 waagent[1932]: 2026-01-24T00:48:02.014611Z INFO Daemon Downloaded certificate {'thumbprint': '0F6EFF8559C5899B27B07E7EECBAB077723E9FA6', 'hasPrivateKey': True}
Jan 24 00:48:02.019789 waagent[1932]: 2026-01-24T00:48:02.019709Z INFO Daemon Fetch goal state completed
Jan 24 00:48:02.064113 waagent[1932]: 2026-01-24T00:48:02.063966Z INFO Daemon Daemon Starting provisioning
Jan 24 00:48:02.066740 waagent[1932]: 2026-01-24T00:48:02.066662Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 24 00:48:02.069481 waagent[1932]: 2026-01-24T00:48:02.069416Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-d923855e69]
Jan 24 00:48:02.105663 waagent[1932]: 2026-01-24T00:48:02.105572Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-d923855e69]
Jan 24 00:48:02.113105 waagent[1932]: 2026-01-24T00:48:02.106135Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 24 00:48:02.113105 waagent[1932]: 2026-01-24T00:48:02.107006Z INFO Daemon Daemon Primary interface is [eth0]
Jan 24 00:48:02.135038 systemd-networkd[1395]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:48:02.135046 systemd-networkd[1395]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:48:02.135093 systemd-networkd[1395]: eth0: DHCP lease lost
Jan 24 00:48:02.136277 waagent[1932]: 2026-01-24T00:48:02.136201Z INFO Daemon Daemon Create user account if not exists
Jan 24 00:48:02.148782 waagent[1932]: 2026-01-24T00:48:02.136549Z INFO Daemon Daemon User core already exists, skip useradd
Jan 24 00:48:02.148782 waagent[1932]: 2026-01-24T00:48:02.137497Z INFO Daemon Daemon Configure sudoer
Jan 24 00:48:02.148782 waagent[1932]: 2026-01-24T00:48:02.138672Z INFO Daemon Daemon Configure sshd
Jan 24 00:48:02.148782 waagent[1932]: 2026-01-24T00:48:02.139477Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 24 00:48:02.148782 waagent[1932]: 2026-01-24T00:48:02.140136Z INFO Daemon Daemon Deploy ssh public key.
Jan 24 00:48:02.153895 systemd-networkd[1395]: eth0: DHCPv6 lease lost
Jan 24 00:48:02.183846 systemd-networkd[1395]: eth0: DHCPv4 address 10.200.4.5/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:48:03.231565 waagent[1932]: 2026-01-24T00:48:03.231477Z INFO Daemon Daemon Provisioning complete
Jan 24 00:48:03.242519 waagent[1932]: 2026-01-24T00:48:03.242462Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 24 00:48:03.249433 waagent[1932]: 2026-01-24T00:48:03.242832Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 24 00:48:03.249433 waagent[1932]: 2026-01-24T00:48:03.243703Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 24 00:48:03.367798 waagent[2027]: 2026-01-24T00:48:03.367696Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 24 00:48:03.368287 waagent[2027]: 2026-01-24T00:48:03.367874Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Jan 24 00:48:03.368287 waagent[2027]: 2026-01-24T00:48:03.367973Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 24 00:48:03.409487 waagent[2027]: 2026-01-24T00:48:03.409386Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 24 00:48:03.409739 waagent[2027]: 2026-01-24T00:48:03.409683Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 24 00:48:03.409878 waagent[2027]: 2026-01-24T00:48:03.409822Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 24 00:48:03.417521 waagent[2027]: 2026-01-24T00:48:03.417454Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 24 00:48:03.422299 waagent[2027]: 2026-01-24T00:48:03.422249Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Jan 24 00:48:03.422727 waagent[2027]: 2026-01-24T00:48:03.422676Z INFO ExtHandler
Jan 24 00:48:03.422827 waagent[2027]: 2026-01-24T00:48:03.422781Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e8cdaf1c-acda-429d-a529-a1b21a769b1a eTag: 2354450011202648558 source: Fabric]
Jan 24 00:48:03.423147 waagent[2027]: 2026-01-24T00:48:03.423094Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 24 00:48:03.423690 waagent[2027]: 2026-01-24T00:48:03.423634Z INFO ExtHandler Jan 24 00:48:03.423754 waagent[2027]: 2026-01-24T00:48:03.423716Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 24 00:48:03.426778 waagent[2027]: 2026-01-24T00:48:03.426722Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 24 00:48:03.488709 waagent[2027]: 2026-01-24T00:48:03.488583Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F6EFF8559C5899B27B07E7EECBAB077723E9FA6', 'hasPrivateKey': True} Jan 24 00:48:03.489191 waagent[2027]: 2026-01-24T00:48:03.489132Z INFO ExtHandler Fetch goal state completed Jan 24 00:48:03.500971 waagent[2027]: 2026-01-24T00:48:03.500912Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2027 Jan 24 00:48:03.501126 waagent[2027]: 2026-01-24T00:48:03.501076Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 24 00:48:03.502647 waagent[2027]: 2026-01-24T00:48:03.502590Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 24 00:48:03.503020 waagent[2027]: 2026-01-24T00:48:03.502969Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 24 00:48:03.559458 waagent[2027]: 2026-01-24T00:48:03.559402Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 24 00:48:03.559721 waagent[2027]: 2026-01-24T00:48:03.559666Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 24 00:48:03.567531 waagent[2027]: 2026-01-24T00:48:03.567431Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 24 00:48:03.574913 systemd[1]: Reloading requested from client PID 2040 ('systemctl') (unit waagent.service)... Jan 24 00:48:03.574931 systemd[1]: Reloading... Jan 24 00:48:03.669824 zram_generator::config[2080]: No configuration found. Jan 24 00:48:03.788321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:48:03.868317 systemd[1]: Reloading finished in 292 ms. Jan 24 00:48:03.894939 waagent[2027]: 2026-01-24T00:48:03.893983Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 24 00:48:03.902472 systemd[1]: Reloading requested from client PID 2136 ('systemctl') (unit waagent.service)... Jan 24 00:48:03.902487 systemd[1]: Reloading... Jan 24 00:48:03.984789 zram_generator::config[2169]: No configuration found. Jan 24 00:48:04.116289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:48:04.197098 systemd[1]: Reloading finished in 294 ms. Jan 24 00:48:04.224196 waagent[2027]: 2026-01-24T00:48:04.223111Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 24 00:48:04.224196 waagent[2027]: 2026-01-24T00:48:04.223321Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 24 00:48:04.543152 waagent[2027]: 2026-01-24T00:48:04.542975Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 24 00:48:04.543925 waagent[2027]: 2026-01-24T00:48:04.543851Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 24 00:48:04.544841 waagent[2027]: 2026-01-24T00:48:04.544748Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 24 00:48:04.545011 waagent[2027]: 2026-01-24T00:48:04.544939Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:48:04.545804 waagent[2027]: 2026-01-24T00:48:04.545642Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:48:04.545804 waagent[2027]: 2026-01-24T00:48:04.545707Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:48:04.545941 waagent[2027]: 2026-01-24T00:48:04.545783Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 24 00:48:04.546549 waagent[2027]: 2026-01-24T00:48:04.546250Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:48:04.546549 waagent[2027]: 2026-01-24T00:48:04.546475Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 24 00:48:04.546738 waagent[2027]: 2026-01-24T00:48:04.546601Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 24 00:48:04.546897 waagent[2027]: 2026-01-24T00:48:04.546824Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 24 00:48:04.547492 waagent[2027]: 2026-01-24T00:48:04.547434Z INFO EnvHandler ExtHandler Configure routes Jan 24 00:48:04.547634 waagent[2027]: 2026-01-24T00:48:04.547576Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 24 00:48:04.547828 waagent[2027]: 2026-01-24T00:48:04.547728Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 24 00:48:04.548054 waagent[2027]: 2026-01-24T00:48:04.548005Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
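The "Goal State Period: 6 sec" entry is, by its own description, just a polling cadence. A sketch of that loop under stated assumptions, with fetch_goal_state() as a hypothetical stand-in for the WireServer request shown earlier:

    import time

    GOAL_STATE_PERIOD = 6  # seconds, matching the period logged above

    def fetch_goal_state():
        # Hypothetical placeholder for the WireServer goal-state request.
        return {"incarnation": 1}

    last = None
    while True:
        incarnation = fetch_goal_state()["incarnation"]
        if incarnation != last:  # only reprocess when the goal state changes
            last = incarnation
            # ... process extensions and report status here ...
        time.sleep(GOAL_STATE_PERIOD)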
Jan 24 00:48:04.548190 waagent[2027]: 2026-01-24T00:48:04.548120Z INFO EnvHandler ExtHandler Gateway:None Jan 24 00:48:04.548759 waagent[2027]: 2026-01-24T00:48:04.548694Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 24 00:48:04.548759 waagent[2027]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 24 00:48:04.548759 waagent[2027]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 24 00:48:04.548759 waagent[2027]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 24 00:48:04.548759 waagent[2027]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:48:04.548759 waagent[2027]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:48:04.548759 waagent[2027]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:48:04.549112 waagent[2027]: 2026-01-24T00:48:04.548855Z INFO EnvHandler ExtHandler Routes:None Jan 24 00:48:04.612072 waagent[2027]: 2026-01-24T00:48:04.611978Z INFO ExtHandler ExtHandler Jan 24 00:48:04.613377 waagent[2027]: 2026-01-24T00:48:04.612340Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1c70e1f0-6ca3-48ca-bc9f-dc090df6cf3d correlation 7356c604-fcc6-4a4e-ac67-6471f6f1b51d created: 2026-01-24T00:47:02.476723Z] Jan 24 00:48:04.613377 waagent[2027]: 2026-01-24T00:48:04.612904Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 24 00:48:04.613903 waagent[2027]: 2026-01-24T00:48:04.613849Z INFO MonitorHandler ExtHandler Network interfaces: Jan 24 00:48:04.613903 waagent[2027]: Executing ['ip', '-a', '-o', 'link']: Jan 24 00:48:04.613903 waagent[2027]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 24 00:48:04.613903 waagent[2027]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:ad:dc brd ff:ff:ff:ff:ff:ff Jan 24 00:48:04.613903 waagent[2027]: 3: enP44734s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1d:ad:dc brd ff:ff:ff:ff:ff:ff\ altname enP44734p0s2 Jan 24 00:48:04.613903 waagent[2027]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 24 00:48:04.613903 waagent[2027]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 24 00:48:04.613903 waagent[2027]: 2: eth0 inet 10.200.4.5/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 24 00:48:04.613903 waagent[2027]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 24 00:48:04.613903 waagent[2027]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 24 00:48:04.613903 waagent[2027]: 2: eth0 inet6 fe80::7e1e:52ff:fe1d:addc/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 24 00:48:04.614334 waagent[2027]: 2026-01-24T00:48:04.613983Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 24 00:48:04.648222 waagent[2027]: 2026-01-24T00:48:04.648062Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: A60F2F6B-DB64-43B8-B6E8-18356A41AF3E;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 24 00:48:04.656686 waagent[2027]: 2026-01-24T00:48:04.656623Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 24 00:48:04.656686 waagent[2027]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.656686 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.656686 waagent[2027]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.656686 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.656686 waagent[2027]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.656686 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.656686 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:48:04.656686 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:48:04.656686 waagent[2027]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:48:04.660004 waagent[2027]: 2026-01-24T00:48:04.659947Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 24 00:48:04.660004 waagent[2027]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.660004 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.660004 waagent[2027]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.660004 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.660004 waagent[2027]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:48:04.660004 waagent[2027]: pkts bytes target prot opt in out source destination Jan 24 00:48:04.660004 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:48:04.660004 waagent[2027]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:48:04.660004 waagent[2027]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:48:04.660392 waagent[2027]: 2026-01-24T00:48:04.660244Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 24 00:48:10.717044 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:48:10.729978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:10.842949 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:10.853130 (kubelet)[2275]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:11.526126 kubelet[2275]: E0124 00:48:11.526048 2275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:11.529932 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:11.530275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:18.855276 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:48:18.861046 systemd[1]: Started sshd@0-10.200.4.5:22-10.200.16.10:34276.service - OpenSSH per-connection server daemon (10.200.16.10:34276). 
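Two details above reward decoding. The three OUTPUT-chain rules restrict traffic to the WireServer: DNS (dpt:53) and root-owned (owner UID 0) connections are accepted, and any other new connection to 168.63.129.16 is dropped. And the /proc/net/route dump printed by the MonitorHandler stores IPv4 addresses as little-endian hex words; a small decoder, with values taken from that table:

    import socket
    import struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores IPv4 addresses as little-endian hex words
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    print(hex_to_ip("0104C80A"))  # 10.200.4.1      -> the DHCP gateway acquired above
    print(hex_to_ip("00FFFFFF"))  # 255.255.255.0   -> the /24 mask of 10.200.4.5/24
    print(hex_to_ip("10813FA8"))  # 168.63.129.16   -> a host route to the WireServer
    print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254 -> the instance metadata service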
Jan 24 00:48:19.538433 sshd[2283]: Accepted publickey for core from 10.200.16.10 port 34276 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:19.540244 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:19.544492 systemd-logind[1792]: New session 3 of user core. Jan 24 00:48:19.553090 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:48:20.064109 systemd[1]: Started sshd@1-10.200.4.5:22-10.200.16.10:37714.service - OpenSSH per-connection server daemon (10.200.16.10:37714). Jan 24 00:48:20.662673 sshd[2288]: Accepted publickey for core from 10.200.16.10 port 37714 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:20.664433 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:20.670273 systemd-logind[1792]: New session 4 of user core. Jan 24 00:48:20.675076 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:48:21.091315 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:21.095741 systemd[1]: sshd@1-10.200.4.5:22-10.200.16.10:37714.service: Deactivated successfully. Jan 24 00:48:21.100138 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:48:21.100974 systemd-logind[1792]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:48:21.101964 systemd-logind[1792]: Removed session 4. Jan 24 00:48:21.200069 systemd[1]: Started sshd@2-10.200.4.5:22-10.200.16.10:37716.service - OpenSSH per-connection server daemon (10.200.16.10:37716). Jan 24 00:48:21.689647 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:48:21.701990 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:21.796881 sshd[2296]: Accepted publickey for core from 10.200.16.10 port 37716 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:21.798187 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:21.809017 systemd-logind[1792]: New session 5 of user core. Jan 24 00:48:21.814089 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:48:21.826953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:21.827216 (kubelet)[2311]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:22.000059 chronyd[1790]: Selected source PHC0 Jan 24 00:48:22.219063 sshd[2296]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:22.223047 systemd[1]: sshd@2-10.200.4.5:22-10.200.16.10:37716.service: Deactivated successfully. Jan 24 00:48:22.227124 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:48:22.227876 systemd-logind[1792]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:48:22.228736 systemd-logind[1792]: Removed session 5. Jan 24 00:48:22.322464 systemd[1]: Started sshd@3-10.200.4.5:22-10.200.16.10:37728.service - OpenSSH per-connection server daemon (10.200.16.10:37728). 
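The SHA256:tUm7... token in each "Accepted publickey" line above is OpenSSH's key fingerprint: the unpadded base64 of the SHA-256 digest of the raw key blob. Reproducing it from an authorized_keys entry takes a few lines:

    import base64
    import hashlib

    def ssh_fingerprint(authorized_keys_line: str) -> str:
        # Field 2 of an authorized_keys entry is the base64-encoded key blob;
        # sshd logs SHA256:<unpadded base64 of sha256(blob)>.
        blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # ssh_fingerprint("ssh-rsa AAAAB3Nza... core@host") would print the
    # "SHA256:tUm7..." value for the key this VM's sessions authenticate with.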
Jan 24 00:48:22.437311 kubelet[2311]: E0124 00:48:22.437230 2311 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:22.439946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:22.440294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:22.921439 sshd[2321]: Accepted publickey for core from 10.200.16.10 port 37728 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:22.923239 sshd[2321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:22.929477 systemd-logind[1792]: New session 6 of user core. Jan 24 00:48:22.939095 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:48:23.349819 sshd[2321]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:23.354532 systemd[1]: sshd@3-10.200.4.5:22-10.200.16.10:37728.service: Deactivated successfully. Jan 24 00:48:23.359462 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:48:23.360228 systemd-logind[1792]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:48:23.361115 systemd-logind[1792]: Removed session 6. Jan 24 00:48:23.460337 systemd[1]: Started sshd@4-10.200.4.5:22-10.200.16.10:37744.service - OpenSSH per-connection server daemon (10.200.16.10:37744). Jan 24 00:48:24.059276 sshd[2333]: Accepted publickey for core from 10.200.16.10 port 37744 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:24.061068 sshd[2333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:24.066658 systemd-logind[1792]: New session 7 of user core. Jan 24 00:48:24.072087 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:48:24.519146 sudo[2337]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:48:24.519518 sudo[2337]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:24.550214 sudo[2337]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:24.647437 sshd[2333]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:24.652273 systemd[1]: sshd@4-10.200.4.5:22-10.200.16.10:37744.service: Deactivated successfully. Jan 24 00:48:24.656453 systemd-logind[1792]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:48:24.657452 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:48:24.658344 systemd-logind[1792]: Removed session 7. Jan 24 00:48:24.755086 systemd[1]: Started sshd@5-10.200.4.5:22-10.200.16.10:37756.service - OpenSSH per-connection server daemon (10.200.16.10:37756). Jan 24 00:48:25.349744 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 37756 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:25.351303 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:25.356189 systemd-logind[1792]: New session 8 of user core. Jan 24 00:48:25.362082 systemd[1]: Started session-8.scope - Session 8 of User core. 
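The recurring kubelet failure above is the expected pre-join state: /var/lib/kubelet/config.yaml is normally written by kubeadm, and until it exists the unit exits with status 1 and systemd keeps rescheduling it. A hypothetical minimal KubeletConfiguration of the kind kubeadm generates, written from Python for illustration only; the file alone does not bootstrap a node (the bootstrap kubeconfig and certificates also come from kubeadm):

    import pathlib
    import textwrap

    # Hypothetical minimal config; real nodes get this file from `kubeadm init/join`.
    config = textwrap.dedent("""\
        apiVersion: kubelet.config.k8s.io/v1beta1
        kind: KubeletConfiguration
        staticPodPath: /etc/kubernetes/manifests
        authentication:
          anonymous:
            enabled: false
        """)

    path = pathlib.Path("/var/lib/kubelet/config.yaml")
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(config)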
Jan 24 00:48:25.682733 sudo[2347]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:48:25.683261 sudo[2347]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:25.686664 sudo[2347]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:25.691698 sudo[2346]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:48:25.692067 sudo[2346]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:25.710151 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:48:25.711833 auditctl[2350]: No rules Jan 24 00:48:25.713220 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:48:25.713728 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:48:25.717436 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:48:25.743663 augenrules[2369]: No rules Jan 24 00:48:25.745392 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:48:25.747829 sudo[2346]: pam_unix(sudo:session): session closed for user root Jan 24 00:48:25.844970 sshd[2342]: pam_unix(sshd:session): session closed for user core Jan 24 00:48:25.849516 systemd[1]: sshd@5-10.200.4.5:22-10.200.16.10:37756.service: Deactivated successfully. Jan 24 00:48:25.853658 systemd-logind[1792]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:48:25.854057 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:48:25.855248 systemd-logind[1792]: Removed session 8. Jan 24 00:48:25.949501 systemd[1]: Started sshd@6-10.200.4.5:22-10.200.16.10:37758.service - OpenSSH per-connection server daemon (10.200.16.10:37758). Jan 24 00:48:26.544995 sshd[2378]: Accepted publickey for core from 10.200.16.10 port 37758 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:48:26.546755 sshd[2378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:48:26.552690 systemd-logind[1792]: New session 9 of user core. Jan 24 00:48:26.558105 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:48:26.877072 sudo[2382]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:48:26.877535 sudo[2382]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:48:28.203274 (dockerd)[2398]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:48:28.203275 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:48:29.653592 dockerd[2398]: time="2026-01-24T00:48:29.653527491Z" level=info msg="Starting up" Jan 24 00:48:30.577614 dockerd[2398]: time="2026-01-24T00:48:30.577566764Z" level=info msg="Loading containers: start." Jan 24 00:48:30.709934 kernel: Initializing XFRM netlink socket Jan 24 00:48:30.827420 systemd-networkd[1395]: docker0: Link UP Jan 24 00:48:30.850018 dockerd[2398]: time="2026-01-24T00:48:30.849976312Z" level=info msg="Loading containers: done." Jan 24 00:48:30.925360 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck712829413-merged.mount: Deactivated successfully. 
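The auditctl/augenrules exchange above is audit-rules.service reloading the kernel ruleset from /etc/audit/rules.d after the two default rule files were removed by sudo; "No rules" is the resulting empty set. The loaded ruleset can be listed directly, assuming auditd's userspace tools are present:

    import subprocess

    # `auditctl -l` prints the rules currently loaded in the kernel; after the
    # restart above it would report "No rules", matching the log.
    out = subprocess.run(["auditctl", "-l"], capture_output=True, text=True)
    print(out.stdout or out.stderr)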
Jan 24 00:48:30.932991 dockerd[2398]: time="2026-01-24T00:48:30.932939806Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:48:30.933117 dockerd[2398]: time="2026-01-24T00:48:30.933083221Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:48:30.933241 dockerd[2398]: time="2026-01-24T00:48:30.933211435Z" level=info msg="Daemon has completed initialization" Jan 24 00:48:30.994168 dockerd[2398]: time="2026-01-24T00:48:30.992975498Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:48:30.993538 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:48:31.929016 containerd[1821]: time="2026-01-24T00:48:31.928974721Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 24 00:48:32.466753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:48:32.475420 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:32.596986 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:32.601501 (kubelet)[2544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:32.639258 kubelet[2544]: E0124 00:48:32.639165 2544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:32.641611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:32.641929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:33.362110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3818556399.mount: Deactivated successfully. 
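"API listen on /run/docker.sock" means the engine is now reachable over a plain unix-socket HTTP endpoint. A stdlib-only probe of that API; GET /version is a stable Engine endpoint, and the daemon log above reports version 26.1.0:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Minimal HTTP-over-unix-socket client for the Docker Engine API.
        def __init__(self, sock_path: str):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(conn.getresponse().read().decode())  # JSON; "Version": "26.1.0" per the log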
Jan 24 00:48:35.008206 containerd[1821]: time="2026-01-24T00:48:35.008141139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.010934 containerd[1821]: time="2026-01-24T00:48:35.010729060Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 24 00:48:35.014071 containerd[1821]: time="2026-01-24T00:48:35.014009588Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.019584 containerd[1821]: time="2026-01-24T00:48:35.019527534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:35.020613 containerd[1821]: time="2026-01-24T00:48:35.020576643Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 3.091559222s" Jan 24 00:48:35.020906 containerd[1821]: time="2026-01-24T00:48:35.020737844Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 24 00:48:35.021820 containerd[1821]: time="2026-01-24T00:48:35.021796053Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 24 00:48:36.524522 containerd[1821]: time="2026-01-24T00:48:36.524468952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:36.527162 containerd[1821]: time="2026-01-24T00:48:36.527097474Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 24 00:48:36.530320 containerd[1821]: time="2026-01-24T00:48:36.530244201Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:36.535103 containerd[1821]: time="2026-01-24T00:48:36.535049841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:36.537966 containerd[1821]: time="2026-01-24T00:48:36.537923165Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.516092312s" Jan 24 00:48:36.538055 containerd[1821]: time="2026-01-24T00:48:36.537970566Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 24 
00:48:36.540491 containerd[1821]: time="2026-01-24T00:48:36.540260285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 24 00:48:37.929040 containerd[1821]: time="2026-01-24T00:48:37.928985529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:37.932380 containerd[1821]: time="2026-01-24T00:48:37.932112455Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 24 00:48:37.936649 containerd[1821]: time="2026-01-24T00:48:37.935416983Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:37.940423 containerd[1821]: time="2026-01-24T00:48:37.940388024Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:37.941421 containerd[1821]: time="2026-01-24T00:48:37.941383533Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.401086547s" Jan 24 00:48:37.941505 containerd[1821]: time="2026-01-24T00:48:37.941426933Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 24 00:48:37.942446 containerd[1821]: time="2026-01-24T00:48:37.942414741Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 24 00:48:38.866789 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 24 00:48:39.165292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3449452764.mount: Deactivated successfully. 
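The "Pulled image" entries carry enough data for a quick throughput estimate. For the kube-scheduler pull above, the reported size and wall time work out to roughly 15 MB/s:

    # Numbers copied from the kube-scheduler pull entry above.
    size_bytes = 21_062_128   # reported image size
    elapsed_s = 1.401086547   # reported pull duration
    print(f"{size_bytes / elapsed_s / 1e6:.1f} MB/s")  # ~15.0 MB/s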
Jan 24 00:48:39.703366 containerd[1821]: time="2026-01-24T00:48:39.703311884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.706440 containerd[1821]: time="2026-01-24T00:48:39.706298126Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 24 00:48:39.709532 containerd[1821]: time="2026-01-24T00:48:39.709472070Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.713836 containerd[1821]: time="2026-01-24T00:48:39.713784930Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:39.714527 containerd[1821]: time="2026-01-24T00:48:39.714490839Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.772041698s" Jan 24 00:48:39.714617 containerd[1821]: time="2026-01-24T00:48:39.714525540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 24 00:48:39.715237 containerd[1821]: time="2026-01-24T00:48:39.715210850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 24 00:48:40.295147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859370403.mount: Deactivated successfully. 
Jan 24 00:48:41.626111 containerd[1821]: time="2026-01-24T00:48:41.626057720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:41.629156 containerd[1821]: time="2026-01-24T00:48:41.629099262Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 24 00:48:41.632422 containerd[1821]: time="2026-01-24T00:48:41.632375408Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:41.670737 containerd[1821]: time="2026-01-24T00:48:41.670515938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:41.672785 containerd[1821]: time="2026-01-24T00:48:41.671699554Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.956455504s" Jan 24 00:48:41.672785 containerd[1821]: time="2026-01-24T00:48:41.671749455Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 24 00:48:41.672785 containerd[1821]: time="2026-01-24T00:48:41.672439965Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 24 00:48:42.180518 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2834232786.mount: Deactivated successfully. 
Jan 24 00:48:42.205354 containerd[1821]: time="2026-01-24T00:48:42.205305974Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:42.208027 containerd[1821]: time="2026-01-24T00:48:42.207845309Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 24 00:48:42.211338 containerd[1821]: time="2026-01-24T00:48:42.211280657Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:42.216110 containerd[1821]: time="2026-01-24T00:48:42.216033823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:42.217415 containerd[1821]: time="2026-01-24T00:48:42.216745133Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 544.272368ms" Jan 24 00:48:42.217415 containerd[1821]: time="2026-01-24T00:48:42.216801334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 24 00:48:42.217415 containerd[1821]: time="2026-01-24T00:48:42.217325341Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 24 00:48:42.716887 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:48:42.722306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:42.869997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:42.874585 (kubelet)[2694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:48:43.495956 kubelet[2694]: E0124 00:48:43.495899 2694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:48:43.498443 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:48:43.498792 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:48:43.562346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970516845.mount: Deactivated successfully. Jan 24 00:48:43.846815 update_engine[1799]: I20260124 00:48:43.846084 1799 update_attempter.cc:509] Updating boot flags... 
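The four "Scheduled restart job" entries for kubelet.service are evenly spaced; the intervals back out to systemd's restart delay plus the short run before each exit. A quick check, with timestamps copied from the entries above and the same day assumed:

    from datetime import datetime

    stamps = ["00:48:10.717044", "00:48:21.689647",
              "00:48:32.466753", "00:48:42.716887"]  # restart counters 1-4
    ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    print([round((b - a).total_seconds(), 2) for a, b in zip(ts, ts[1:])])
    # [10.97, 10.78, 10.25] -> consistent with a ~10 s restart delay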
Jan 24 00:48:43.936123 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2726) Jan 24 00:48:44.152786 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2727) Jan 24 00:48:44.332818 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2727) Jan 24 00:48:46.108932 containerd[1821]: time="2026-01-24T00:48:46.108876018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:46.115932 containerd[1821]: time="2026-01-24T00:48:46.115867944Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 24 00:48:46.119716 containerd[1821]: time="2026-01-24T00:48:46.119660712Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:46.125522 containerd[1821]: time="2026-01-24T00:48:46.125467617Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:48:46.126800 containerd[1821]: time="2026-01-24T00:48:46.126586237Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.909227395s" Jan 24 00:48:46.126800 containerd[1821]: time="2026-01-24T00:48:46.126627538Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 24 00:48:49.038091 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:49.050065 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:49.103056 systemd[1]: Reloading requested from client PID 2877 ('systemctl') (unit session-9.scope)... Jan 24 00:48:49.103077 systemd[1]: Reloading... Jan 24 00:48:49.234896 zram_generator::config[2918]: No configuration found. Jan 24 00:48:49.366009 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:48:49.446530 systemd[1]: Reloading finished in 342 ms. Jan 24 00:48:49.504276 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:48:49.504393 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:48:49.504925 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:49.509041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:49.861948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:49.862203 (kubelet)[2999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:48:49.900328 kubelet[2999]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:49.900328 kubelet[2999]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:48:49.900328 kubelet[2999]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:49.900883 kubelet[2999]: I0124 00:48:49.900409 2999 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:48:50.216861 kubelet[2999]: I0124 00:48:50.216823 2999 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:48:50.216861 kubelet[2999]: I0124 00:48:50.216855 2999 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:48:50.217244 kubelet[2999]: I0124 00:48:50.217218 2999 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:48:50.688095 kubelet[2999]: E0124 00:48:50.687929 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:50.689499 kubelet[2999]: I0124 00:48:50.689285 2999 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:48:50.701386 kubelet[2999]: E0124 00:48:50.701353 2999 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:48:50.701386 kubelet[2999]: I0124 00:48:50.701382 2999 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:48:50.705486 kubelet[2999]: I0124 00:48:50.705087 2999 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:48:50.706360 kubelet[2999]: I0124 00:48:50.706309 2999 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:48:50.706543 kubelet[2999]: I0124 00:48:50.706356 2999 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d923855e69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:48:50.706685 kubelet[2999]: I0124 00:48:50.706556 2999 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:48:50.706685 kubelet[2999]: I0124 00:48:50.706570 2999 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:48:50.706792 kubelet[2999]: I0124 00:48:50.706716 2999 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:50.710331 kubelet[2999]: I0124 00:48:50.710309 2999 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:48:50.710430 kubelet[2999]: I0124 00:48:50.710345 2999 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:48:50.710430 kubelet[2999]: I0124 00:48:50.710373 2999 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:48:50.710430 kubelet[2999]: I0124 00:48:50.710387 2999 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:48:50.718996 kubelet[2999]: W0124 00:48:50.718948 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d923855e69&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:50.719089 kubelet[2999]: E0124 00:48:50.719006 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d923855e69&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:50.719138 kubelet[2999]: 
W0124 00:48:50.719107 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:50.719180 kubelet[2999]: E0124 00:48:50.719151 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:50.720113 kubelet[2999]: I0124 00:48:50.720089 2999 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:48:50.720495 kubelet[2999]: I0124 00:48:50.720472 2999 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:48:50.721165 kubelet[2999]: W0124 00:48:50.721143 2999 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:48:50.723259 kubelet[2999]: I0124 00:48:50.723238 2999 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:48:50.723337 kubelet[2999]: I0124 00:48:50.723283 2999 server.go:1287] "Started kubelet" Jan 24 00:48:50.726794 kubelet[2999]: I0124 00:48:50.725314 2999 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:48:50.726794 kubelet[2999]: I0124 00:48:50.725709 2999 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:48:50.726794 kubelet[2999]: I0124 00:48:50.725794 2999 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:48:50.726794 kubelet[2999]: I0124 00:48:50.726238 2999 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:48:50.727095 kubelet[2999]: I0124 00:48:50.727079 2999 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:48:50.733485 kubelet[2999]: I0124 00:48:50.733458 2999 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:48:50.733955 kubelet[2999]: I0124 00:48:50.733935 2999 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:48:50.734285 kubelet[2999]: E0124 00:48:50.734266 2999 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d923855e69\" not found" Jan 24 00:48:50.736572 kubelet[2999]: E0124 00:48:50.736538 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d923855e69?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="200ms" Jan 24 00:48:50.737662 kubelet[2999]: I0124 00:48:50.737647 2999 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:48:50.737853 kubelet[2999]: I0124 00:48:50.737842 2999 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:48:50.737961 kubelet[2999]: E0124 00:48:50.736618 2999 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.5:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.5:6443: connect: connection 
refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-d923855e69.188d845558caea05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-d923855e69,UID:ci-4081.3.6-n-d923855e69,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-d923855e69,},FirstTimestamp:2026-01-24 00:48:50.723252741 +0000 UTC m=+0.856217806,LastTimestamp:2026-01-24 00:48:50.723252741 +0000 UTC m=+0.856217806,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-d923855e69,}" Jan 24 00:48:50.738166 kubelet[2999]: I0124 00:48:50.738141 2999 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:48:50.738255 kubelet[2999]: I0124 00:48:50.738227 2999 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:48:50.740572 kubelet[2999]: I0124 00:48:50.740546 2999 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:48:50.747837 kubelet[2999]: W0124 00:48:50.746756 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:50.747837 kubelet[2999]: E0124 00:48:50.746848 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:50.763873 kubelet[2999]: I0124 00:48:50.763841 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:48:50.769360 kubelet[2999]: I0124 00:48:50.769338 2999 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:48:50.769475 kubelet[2999]: I0124 00:48:50.769465 2999 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:48:50.769560 kubelet[2999]: I0124 00:48:50.769546 2999 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 24 00:48:50.769625 kubelet[2999]: I0124 00:48:50.769618 2999 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:48:50.769752 kubelet[2999]: E0124 00:48:50.769731 2999 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:48:50.776008 kubelet[2999]: W0124 00:48:50.775711 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:50.776133 kubelet[2999]: E0124 00:48:50.776116 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:50.780565 kubelet[2999]: I0124 00:48:50.780543 2999 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:48:50.780565 kubelet[2999]: I0124 00:48:50.780561 2999 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:48:50.780674 kubelet[2999]: I0124 00:48:50.780579 2999 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:50.786279 kubelet[2999]: I0124 00:48:50.786257 2999 policy_none.go:49] "None policy: Start" Jan 24 00:48:50.786279 kubelet[2999]: I0124 00:48:50.786278 2999 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:48:50.786388 kubelet[2999]: I0124 00:48:50.786291 2999 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:48:50.796385 kubelet[2999]: I0124 00:48:50.796360 2999 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:48:50.796557 kubelet[2999]: I0124 00:48:50.796538 2999 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:48:50.796617 kubelet[2999]: I0124 00:48:50.796554 2999 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:48:50.797704 kubelet[2999]: I0124 00:48:50.797678 2999 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:48:50.800478 kubelet[2999]: E0124 00:48:50.800458 2999 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:48:50.800571 kubelet[2999]: E0124 00:48:50.800504 2999 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-d923855e69\" not found" Jan 24 00:48:50.878335 kubelet[2999]: E0124 00:48:50.878288 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:50.881655 kubelet[2999]: E0124 00:48:50.881444 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:50.883277 kubelet[2999]: E0124 00:48:50.883255 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:50.898309 kubelet[2999]: I0124 00:48:50.898282 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:50.898691 kubelet[2999]: E0124 00:48:50.898658 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:50.937565 kubelet[2999]: E0124 00:48:50.937517 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d923855e69?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="400ms" Jan 24 00:48:51.039252 kubelet[2999]: I0124 00:48:51.039203 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039252 kubelet[2999]: I0124 00:48:51.039262 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039676 kubelet[2999]: I0124 00:48:51.039294 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78da6b16f3baa94815983c9036abe815-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d923855e69\" (UID: \"78da6b16f3baa94815983c9036abe815\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039676 kubelet[2999]: I0124 00:48:51.039327 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039676 kubelet[2999]: I0124 00:48:51.039356 2999 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039676 kubelet[2999]: I0124 00:48:51.039414 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039676 kubelet[2999]: I0124 00:48:51.039443 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039864 kubelet[2999]: I0124 00:48:51.039490 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.039864 kubelet[2999]: I0124 00:48:51.039517 2999 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.101330 kubelet[2999]: I0124 00:48:51.101294 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.101720 kubelet[2999]: E0124 00:48:51.101680 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.180686 containerd[1821]: time="2026-01-24T00:48:51.180638171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d923855e69,Uid:ddb49d070ffca59c40711b4e16caf65c,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:51.183228 containerd[1821]: time="2026-01-24T00:48:51.183197017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d923855e69,Uid:78da6b16f3baa94815983c9036abe815,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:51.184666 containerd[1821]: time="2026-01-24T00:48:51.184639443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d923855e69,Uid:c1e4862d40e4021d5e5b2a5b1d8cabc0,Namespace:kube-system,Attempt:0,}" Jan 24 00:48:51.338704 kubelet[2999]: E0124 00:48:51.338570 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d923855e69?timeout=10s\": dial tcp 
10.200.4.5:6443: connect: connection refused" interval="800ms" Jan 24 00:48:51.504280 kubelet[2999]: I0124 00:48:51.503956 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.504280 kubelet[2999]: E0124 00:48:51.504250 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:51.766315 kubelet[2999]: W0124 00:48:51.766241 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:51.766455 kubelet[2999]: E0124 00:48:51.766323 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:51.774736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983170065.mount: Deactivated successfully. Jan 24 00:48:51.793274 containerd[1821]: time="2026-01-24T00:48:51.793227560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:51.798712 containerd[1821]: time="2026-01-24T00:48:51.798671817Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:51.801251 containerd[1821]: time="2026-01-24T00:48:51.801192344Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 00:48:51.804175 containerd[1821]: time="2026-01-24T00:48:51.804135675Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:48:51.807023 containerd[1821]: time="2026-01-24T00:48:51.806989705Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:51.810379 containerd[1821]: time="2026-01-24T00:48:51.810343041Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:51.813057 containerd[1821]: time="2026-01-24T00:48:51.813006969Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:48:51.818261 containerd[1821]: time="2026-01-24T00:48:51.817287715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:48:51.818408 containerd[1821]: time="2026-01-24T00:48:51.818374926Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.647054ms" Jan 24 00:48:51.819739 containerd[1821]: time="2026-01-24T00:48:51.819702540Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 636.442122ms" Jan 24 00:48:51.835117 containerd[1821]: time="2026-01-24T00:48:51.835077603Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.377859ms" Jan 24 00:48:51.975567 kubelet[2999]: W0124 00:48:51.975529 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:51.976037 kubelet[2999]: E0124 00:48:51.975582 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:52.016511 kubelet[2999]: W0124 00:48:52.016354 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d923855e69&limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:52.016511 kubelet[2999]: E0124 00:48:52.016433 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-d923855e69&limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:52.082013 kubelet[2999]: W0124 00:48:52.081819 2999 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.5:6443: connect: connection refused Jan 24 00:48:52.082013 kubelet[2999]: E0124 00:48:52.081876 2999 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:52.140106 kubelet[2999]: E0124 00:48:52.140050 2999 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-d923855e69?timeout=10s\": dial tcp 10.200.4.5:6443: connect: connection refused" interval="1.6s" Jan 24 00:48:52.306978 kubelet[2999]: I0124 00:48:52.306875 2999 kubelet_node_status.go:75] 
"Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:52.307440 kubelet[2999]: E0124 00:48:52.307397 2999 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.5:6443/api/v1/nodes\": dial tcp 10.200.4.5:6443: connect: connection refused" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:52.528309 containerd[1821]: time="2026-01-24T00:48:52.527782644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:52.528309 containerd[1821]: time="2026-01-24T00:48:52.527853244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:52.528309 containerd[1821]: time="2026-01-24T00:48:52.527875845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.528309 containerd[1821]: time="2026-01-24T00:48:52.527978246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.530670 containerd[1821]: time="2026-01-24T00:48:52.530568273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:52.530670 containerd[1821]: time="2026-01-24T00:48:52.530612774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:52.530670 containerd[1821]: time="2026-01-24T00:48:52.530633374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.530991 containerd[1821]: time="2026-01-24T00:48:52.530725575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.531639 containerd[1821]: time="2026-01-24T00:48:52.531551484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:48:52.531639 containerd[1821]: time="2026-01-24T00:48:52.531591084Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:48:52.531639 containerd[1821]: time="2026-01-24T00:48:52.531606384Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.531884 containerd[1821]: time="2026-01-24T00:48:52.531687985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:48:52.634313 containerd[1821]: time="2026-01-24T00:48:52.634186471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-d923855e69,Uid:78da6b16f3baa94815983c9036abe815,Namespace:kube-system,Attempt:0,} returns sandbox id \"728c6289f543d2e1c92b71b3e064b0e162d63d08f197d76796ec247b3d7d85e1\"" Jan 24 00:48:52.638527 containerd[1821]: time="2026-01-24T00:48:52.638495417Z" level=info msg="CreateContainer within sandbox \"728c6289f543d2e1c92b71b3e064b0e162d63d08f197d76796ec247b3d7d85e1\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:48:52.654927 containerd[1821]: time="2026-01-24T00:48:52.654877890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-d923855e69,Uid:ddb49d070ffca59c40711b4e16caf65c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4dcf48b7d83e89ec6d4cc9a950880a993a9a309bb2c2242706d62c6c3227db67\"" Jan 24 00:48:52.657699 containerd[1821]: time="2026-01-24T00:48:52.657666620Z" level=info msg="CreateContainer within sandbox \"4dcf48b7d83e89ec6d4cc9a950880a993a9a309bb2c2242706d62c6c3227db67\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:48:52.662967 containerd[1821]: time="2026-01-24T00:48:52.662493671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-d923855e69,Uid:c1e4862d40e4021d5e5b2a5b1d8cabc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"516b465f532a3ebd51ddc3ffb2cd95b2ae44320133482590fb7537e4babf7d50\"" Jan 24 00:48:52.665145 containerd[1821]: time="2026-01-24T00:48:52.665042298Z" level=info msg="CreateContainer within sandbox \"516b465f532a3ebd51ddc3ffb2cd95b2ae44320133482590fb7537e4babf7d50\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:48:52.713841 kubelet[2999]: E0124 00:48:52.713785 2999 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.5:6443: connect: connection refused" logger="UnhandledError" Jan 24 00:48:52.718031 containerd[1821]: time="2026-01-24T00:48:52.717890058Z" level=info msg="CreateContainer within sandbox \"728c6289f543d2e1c92b71b3e064b0e162d63d08f197d76796ec247b3d7d85e1\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"986f71ece89dc8e0335aa1eba09e8660ce7ee4af6fdd4fccec7095de376e12c2\"" Jan 24 00:48:52.719649 containerd[1821]: time="2026-01-24T00:48:52.718591266Z" level=info msg="StartContainer for \"986f71ece89dc8e0335aa1eba09e8660ce7ee4af6fdd4fccec7095de376e12c2\"" Jan 24 00:48:52.759176 containerd[1821]: time="2026-01-24T00:48:52.757988883Z" level=info msg="CreateContainer within sandbox \"4dcf48b7d83e89ec6d4cc9a950880a993a9a309bb2c2242706d62c6c3227db67\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0309ef0b2892dd90dd4dd29f13bf9fbe8b467b8874c85b10905b07d803acc4f1\"" Jan 24 00:48:52.768819 containerd[1821]: time="2026-01-24T00:48:52.763227639Z" level=info msg="StartContainer for \"0309ef0b2892dd90dd4dd29f13bf9fbe8b467b8874c85b10905b07d803acc4f1\"" Jan 24 00:48:52.825065 containerd[1821]: time="2026-01-24T00:48:52.824997493Z" level=info msg="CreateContainer within sandbox 
\"516b465f532a3ebd51ddc3ffb2cd95b2ae44320133482590fb7537e4babf7d50\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"26190959b146ca9a0047cd0478bf6ce82f86967aded0b7f7276378f142f85d29\"" Jan 24 00:48:52.827243 containerd[1821]: time="2026-01-24T00:48:52.825966403Z" level=info msg="StartContainer for \"26190959b146ca9a0047cd0478bf6ce82f86967aded0b7f7276378f142f85d29\"" Jan 24 00:48:52.844370 containerd[1821]: time="2026-01-24T00:48:52.844229397Z" level=info msg="StartContainer for \"986f71ece89dc8e0335aa1eba09e8660ce7ee4af6fdd4fccec7095de376e12c2\" returns successfully" Jan 24 00:48:52.947618 containerd[1821]: time="2026-01-24T00:48:52.947505391Z" level=info msg="StartContainer for \"0309ef0b2892dd90dd4dd29f13bf9fbe8b467b8874c85b10905b07d803acc4f1\" returns successfully" Jan 24 00:48:52.985626 containerd[1821]: time="2026-01-24T00:48:52.985578295Z" level=info msg="StartContainer for \"26190959b146ca9a0047cd0478bf6ce82f86967aded0b7f7276378f142f85d29\" returns successfully" Jan 24 00:48:53.803796 kubelet[2999]: E0124 00:48:53.803498 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:53.806803 kubelet[2999]: E0124 00:48:53.806333 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:53.811792 kubelet[2999]: E0124 00:48:53.811217 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:53.911732 kubelet[2999]: I0124 00:48:53.910982 2999 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:54.817368 kubelet[2999]: E0124 00:48:54.817327 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:54.818857 kubelet[2999]: E0124 00:48:54.817812 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:54.819181 kubelet[2999]: E0124 00:48:54.819162 2999 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.424549 kubelet[2999]: E0124 00:48:55.424488 2999 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-d923855e69\" not found" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.516795 kubelet[2999]: I0124 00:48:55.516240 2999 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.535888 kubelet[2999]: I0124 00:48:55.535841 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.565109 kubelet[2999]: E0124 00:48:55.565065 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.565109 kubelet[2999]: I0124 
00:48:55.565115 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.572621 kubelet[2999]: E0124 00:48:55.572583 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.572621 kubelet[2999]: I0124 00:48:55.572618 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.579092 kubelet[2999]: E0124 00:48:55.579053 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.716892 kubelet[2999]: I0124 00:48:55.716860 2999 apiserver.go:52] "Watching apiserver" Jan 24 00:48:55.738208 kubelet[2999]: I0124 00:48:55.738034 2999 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:48:55.812840 kubelet[2999]: I0124 00:48:55.812759 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.816787 kubelet[2999]: I0124 00:48:55.815013 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.817135 kubelet[2999]: E0124 00:48:55.817108 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:55.817695 kubelet[2999]: E0124 00:48:55.817668 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:56.095338 kubelet[2999]: I0124 00:48:56.093836 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:56.098717 kubelet[2999]: E0124 00:48:56.098315 2999 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:57.619038 systemd[1]: Reloading requested from client PID 3271 ('systemctl') (unit session-9.scope)... Jan 24 00:48:57.619056 systemd[1]: Reloading... Jan 24 00:48:57.647308 kubelet[2999]: I0124 00:48:57.646462 2999 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:57.656034 kubelet[2999]: W0124 00:48:57.655719 2999 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 00:48:57.740857 zram_generator::config[3317]: No configuration found. Jan 24 00:48:57.872010 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
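
[Annotation] The repeated "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors in this stretch are transient: the static control-plane pods request the built-in system-node-critical PriorityClass, which kube-apiserver only creates shortly after it begins serving, so the kubelet's earliest mirror-pod attempts are rejected and retried. A minimal client-go sketch that waits for the class to appear; the kubeconfig path is an assumption for illustration, not taken from this log:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Hypothetical kubeconfig path -- not read from this log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll until kube-apiserver has created the built-in class;
        // mirror pods for system-node-critical static pods are rejected
        // until this Get succeeds.
        for {
            pc, err := cs.SchedulingV1().PriorityClasses().Get(
                context.TODO(), "system-node-critical", metav1.GetOptions{})
            if err == nil {
                fmt.Printf("%s present, value=%d\n", pc.Name, pc.Value)
                return
            }
            time.Sleep(2 * time.Second)
        }
    }
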
Jan 24 00:48:57.965074 systemd[1]: Reloading finished in 345 ms. Jan 24 00:48:58.009096 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:58.032735 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:48:58.033194 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:58.046511 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:48:58.179971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:48:58.196310 (kubelet)[3388]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:48:58.238213 kubelet[3388]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:58.238805 kubelet[3388]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:48:58.238805 kubelet[3388]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:48:58.238805 kubelet[3388]: I0124 00:48:58.238642 3388 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:48:58.244491 kubelet[3388]: I0124 00:48:58.244457 3388 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 24 00:48:58.244491 kubelet[3388]: I0124 00:48:58.244480 3388 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:48:58.244786 kubelet[3388]: I0124 00:48:58.244756 3388 server.go:954] "Client rotation is on, will bootstrap in background" Jan 24 00:48:58.247804 kubelet[3388]: I0124 00:48:58.246363 3388 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 24 00:48:58.250704 kubelet[3388]: I0124 00:48:58.250681 3388 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:48:58.253884 kubelet[3388]: E0124 00:48:58.253858 3388 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:48:58.253994 kubelet[3388]: I0124 00:48:58.253982 3388 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 24 00:48:58.257525 kubelet[3388]: I0124 00:48:58.257508 3388 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 24 00:48:58.258193 kubelet[3388]: I0124 00:48:58.258148 3388 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:48:58.258526 kubelet[3388]: I0124 00:48:58.258275 3388 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-d923855e69","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 24 00:48:58.258752 kubelet[3388]: I0124 00:48:58.258737 3388 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:48:58.258922 kubelet[3388]: I0124 00:48:58.258910 3388 container_manager_linux.go:304] "Creating device plugin manager" Jan 24 00:48:58.259040 kubelet[3388]: I0124 00:48:58.259029 3388 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:58.259277 kubelet[3388]: I0124 00:48:58.259265 3388 kubelet.go:446] "Attempting to sync node with API server" Jan 24 00:48:58.260114 kubelet[3388]: I0124 00:48:58.260099 3388 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:48:58.260227 kubelet[3388]: I0124 00:48:58.260219 3388 kubelet.go:352] "Adding apiserver pod source" Jan 24 00:48:58.261812 kubelet[3388]: I0124 00:48:58.261796 3388 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:48:58.262955 kubelet[3388]: I0124 00:48:58.262938 3388 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:48:58.263547 kubelet[3388]: I0124 00:48:58.263529 3388 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 24 00:48:58.266236 kubelet[3388]: I0124 00:48:58.266222 3388 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 24 00:48:58.266315 kubelet[3388]: I0124 00:48:58.266309 3388 server.go:1287] "Started kubelet" Jan 24 00:48:58.268306 kubelet[3388]: I0124 00:48:58.268287 3388 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:48:58.275164 kubelet[3388]: I0124 00:48:58.275127 3388 server.go:169] 
"Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:48:58.277515 kubelet[3388]: I0124 00:48:58.277495 3388 server.go:479] "Adding debug handlers to kubelet server" Jan 24 00:48:58.281438 kubelet[3388]: I0124 00:48:58.281387 3388 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:48:58.281721 kubelet[3388]: I0124 00:48:58.281708 3388 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:48:58.282072 kubelet[3388]: I0124 00:48:58.282053 3388 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:48:58.285790 kubelet[3388]: I0124 00:48:58.284705 3388 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 24 00:48:58.285790 kubelet[3388]: E0124 00:48:58.284965 3388 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-d923855e69\" not found" Jan 24 00:48:58.292505 kubelet[3388]: I0124 00:48:58.292488 3388 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 24 00:48:58.292728 kubelet[3388]: I0124 00:48:58.292717 3388 reconciler.go:26] "Reconciler: start to sync state" Jan 24 00:48:58.295565 kubelet[3388]: I0124 00:48:58.295531 3388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 24 00:48:58.296996 kubelet[3388]: I0124 00:48:58.296979 3388 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 24 00:48:58.297095 kubelet[3388]: I0124 00:48:58.297087 3388 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 24 00:48:58.297179 kubelet[3388]: I0124 00:48:58.297169 3388 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:48:58.297243 kubelet[3388]: I0124 00:48:58.297235 3388 kubelet.go:2382] "Starting kubelet main sync loop" Jan 24 00:48:58.297359 kubelet[3388]: E0124 00:48:58.297342 3388 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:48:58.309189 kubelet[3388]: I0124 00:48:58.309169 3388 factory.go:221] Registration of the containerd container factory successfully Jan 24 00:48:58.309418 kubelet[3388]: I0124 00:48:58.309405 3388 factory.go:221] Registration of the systemd container factory successfully Jan 24 00:48:58.309695 kubelet[3388]: I0124 00:48:58.309671 3388 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:48:58.310818 kubelet[3388]: E0124 00:48:58.310669 3388 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:48:58.373009 kubelet[3388]: I0124 00:48:58.372986 3388 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373163 3388 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373187 3388 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373349 3388 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373359 3388 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373375 3388 policy_none.go:49] "None policy: Start" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373385 3388 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373393 3388 state_mem.go:35] "Initializing new in-memory state store" Jan 24 00:48:58.373786 kubelet[3388]: I0124 00:48:58.373476 3388 state_mem.go:75] "Updated machine memory state" Jan 24 00:48:58.374799 kubelet[3388]: I0124 00:48:58.374758 3388 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 24 00:48:58.375088 kubelet[3388]: I0124 00:48:58.375073 3388 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:48:58.375216 kubelet[3388]: I0124 00:48:58.375179 3388 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:48:58.376623 kubelet[3388]: I0124 00:48:58.376598 3388 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:48:58.377228 kubelet[3388]: E0124 00:48:58.377120 3388 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:48:58.400499 kubelet[3388]: I0124 00:48:58.398450 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.400499 kubelet[3388]: I0124 00:48:58.398757 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.400499 kubelet[3388]: I0124 00:48:58.399008 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.411881 kubelet[3388]: W0124 00:48:58.411860 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 00:48:58.413231 kubelet[3388]: W0124 00:48:58.413210 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 00:48:58.414088 kubelet[3388]: W0124 00:48:58.414073 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 00:48:58.414211 kubelet[3388]: E0124 00:48:58.414197 3388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-d923855e69\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.479300 kubelet[3388]: I0124 00:48:58.479247 3388 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.490027 kubelet[3388]: I0124 00:48:58.489992 3388 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.490175 kubelet[3388]: I0124 00:48:58.490078 3388 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.493562 kubelet[3388]: I0124 00:48:58.493519 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.493562 kubelet[3388]: I0124 00:48:58.493564 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.494960 kubelet[3388]: I0124 00:48:58.493588 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.494960 kubelet[3388]: I0124 00:48:58.493610 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.494960 kubelet[3388]: I0124 00:48:58.493633 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/78da6b16f3baa94815983c9036abe815-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-d923855e69\" (UID: \"78da6b16f3baa94815983c9036abe815\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.494960 kubelet[3388]: I0124 00:48:58.493655 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.494960 kubelet[3388]: I0124 00:48:58.493681 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.495161 kubelet[3388]: I0124 00:48:58.493707 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c1e4862d40e4021d5e5b2a5b1d8cabc0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-d923855e69\" (UID: \"c1e4862d40e4021d5e5b2a5b1d8cabc0\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:58.495161 kubelet[3388]: I0124 00:48:58.493732 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ddb49d070ffca59c40711b4e16caf65c-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-d923855e69\" (UID: \"ddb49d070ffca59c40711b4e16caf65c\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" Jan 24 00:48:59.263293 kubelet[3388]: I0124 00:48:59.263216 3388 apiserver.go:52] "Watching apiserver" Jan 24 00:48:59.294457 kubelet[3388]: I0124 00:48:59.293704 3388 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 24 00:48:59.326712 kubelet[3388]: I0124 00:48:59.326656 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-d923855e69" podStartSLOduration=2.326638058 podStartE2EDuration="2.326638058s" podCreationTimestamp="2026-01-24 00:48:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:59.325441246 +0000 UTC m=+1.125034563" watchObservedRunningTime="2026-01-24 00:48:59.326638058 +0000 UTC m=+1.126231275" Jan 24 00:48:59.342467 kubelet[3388]: I0124 00:48:59.341822 3388 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:59.353305 kubelet[3388]: I0124 00:48:59.352834 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" podStartSLOduration=1.352811716 podStartE2EDuration="1.352811716s" podCreationTimestamp="2026-01-24 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:59.337166262 +0000 UTC m=+1.136759479" watchObservedRunningTime="2026-01-24 00:48:59.352811716 +0000 UTC m=+1.152404933" Jan 24 00:48:59.364842 kubelet[3388]: W0124 00:48:59.364515 3388 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 24 00:48:59.364842 kubelet[3388]: E0124 00:48:59.364642 3388 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-d923855e69\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-d923855e69" Jan 24 00:48:59.372782 kubelet[3388]: I0124 00:48:59.372721 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-d923855e69" podStartSLOduration=1.372686511 podStartE2EDuration="1.372686511s" podCreationTimestamp="2026-01-24 00:48:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:48:59.353593523 +0000 UTC m=+1.153186740" watchObservedRunningTime="2026-01-24 00:48:59.372686511 +0000 UTC m=+1.172279728" Jan 24 00:49:02.639907 kubelet[3388]: I0124 00:49:02.639860 3388 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:49:02.640682 containerd[1821]: time="2026-01-24T00:49:02.640258993Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 24 00:49:02.641215 kubelet[3388]: I0124 00:49:02.640709 3388 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:49:03.424023 kubelet[3388]: I0124 00:49:03.423958 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/be815cdd-4258-4b4a-9c78-345ea04f8553-xtables-lock\") pod \"kube-proxy-hcntp\" (UID: \"be815cdd-4258-4b4a-9c78-345ea04f8553\") " pod="kube-system/kube-proxy-hcntp" Jan 24 00:49:03.424023 kubelet[3388]: I0124 00:49:03.424007 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnrll\" (UniqueName: \"kubernetes.io/projected/be815cdd-4258-4b4a-9c78-345ea04f8553-kube-api-access-lnrll\") pod \"kube-proxy-hcntp\" (UID: \"be815cdd-4258-4b4a-9c78-345ea04f8553\") " pod="kube-system/kube-proxy-hcntp" Jan 24 00:49:03.424228 kubelet[3388]: I0124 00:49:03.424041 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/be815cdd-4258-4b4a-9c78-345ea04f8553-kube-proxy\") pod \"kube-proxy-hcntp\" (UID: \"be815cdd-4258-4b4a-9c78-345ea04f8553\") " pod="kube-system/kube-proxy-hcntp" Jan 24 00:49:03.424228 kubelet[3388]: I0124 00:49:03.424061 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/be815cdd-4258-4b4a-9c78-345ea04f8553-lib-modules\") pod \"kube-proxy-hcntp\" (UID: \"be815cdd-4258-4b4a-9c78-345ea04f8553\") " pod="kube-system/kube-proxy-hcntp" Jan 24 00:49:03.702168 containerd[1821]: time="2026-01-24T00:49:03.701694724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcntp,Uid:be815cdd-4258-4b4a-9c78-345ea04f8553,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:03.726702 kubelet[3388]: I0124 00:49:03.726659 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkrms\" (UniqueName: \"kubernetes.io/projected/7df6e3d6-aecc-4483-9f9c-b102f3623675-kube-api-access-zkrms\") pod \"tigera-operator-7dcd859c48-652tl\" (UID: \"7df6e3d6-aecc-4483-9f9c-b102f3623675\") " pod="tigera-operator/tigera-operator-7dcd859c48-652tl" Jan 24 00:49:03.726702 kubelet[3388]: I0124 00:49:03.726706 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7df6e3d6-aecc-4483-9f9c-b102f3623675-var-lib-calico\") pod \"tigera-operator-7dcd859c48-652tl\" (UID: \"7df6e3d6-aecc-4483-9f9c-b102f3623675\") " pod="tigera-operator/tigera-operator-7dcd859c48-652tl" Jan 24 00:49:03.815637 containerd[1821]: time="2026-01-24T00:49:03.815515853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:03.815637 containerd[1821]: time="2026-01-24T00:49:03.815568154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:03.815637 containerd[1821]: time="2026-01-24T00:49:03.815597654Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:03.816219 containerd[1821]: time="2026-01-24T00:49:03.815700855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:03.866460 containerd[1821]: time="2026-01-24T00:49:03.866419458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hcntp,Uid:be815cdd-4258-4b4a-9c78-345ea04f8553,Namespace:kube-system,Attempt:0,} returns sandbox id \"841ed2d3720bd4d38beb1260fbc7973f32ba034b1b081bdbe0e1b0029683518a\"" Jan 24 00:49:03.874521 containerd[1821]: time="2026-01-24T00:49:03.873752831Z" level=info msg="CreateContainer within sandbox \"841ed2d3720bd4d38beb1260fbc7973f32ba034b1b081bdbe0e1b0029683518a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:49:03.928888 containerd[1821]: time="2026-01-24T00:49:03.928838478Z" level=info msg="CreateContainer within sandbox \"841ed2d3720bd4d38beb1260fbc7973f32ba034b1b081bdbe0e1b0029683518a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4de39ed8f0c9ab8d0d1841cb1c946ea34ef80cf9c827cbd804463c97c799d602\"" Jan 24 00:49:03.929654 containerd[1821]: time="2026-01-24T00:49:03.929607985Z" level=info msg="StartContainer for \"4de39ed8f0c9ab8d0d1841cb1c946ea34ef80cf9c827cbd804463c97c799d602\"" Jan 24 00:49:03.995799 containerd[1821]: time="2026-01-24T00:49:03.995406938Z" level=info msg="StartContainer for \"4de39ed8f0c9ab8d0d1841cb1c946ea34ef80cf9c827cbd804463c97c799d602\" returns successfully" Jan 24 00:49:04.015327 containerd[1821]: time="2026-01-24T00:49:04.015282935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-652tl,Uid:7df6e3d6-aecc-4483-9f9c-b102f3623675,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:49:04.069280 containerd[1821]: time="2026-01-24T00:49:04.069168970Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:04.069280 containerd[1821]: time="2026-01-24T00:49:04.069220371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:04.069280 containerd[1821]: time="2026-01-24T00:49:04.069233071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:04.069914 containerd[1821]: time="2026-01-24T00:49:04.069339072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:04.138697 containerd[1821]: time="2026-01-24T00:49:04.138629259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-652tl,Uid:7df6e3d6-aecc-4483-9f9c-b102f3623675,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"01c86092e390c69bf9e8864e1a45b9e404cb7abb0b066a2e0659611cba15dab7\"" Jan 24 00:49:04.141111 containerd[1821]: time="2026-01-24T00:49:04.141074283Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:49:05.642796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1236947431.mount: Deactivated successfully. 
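
[Annotation] The PullImage line above starts a CRI-driven fetch of quay.io/tigera/operator:v1.38.7 that concludes with the ImageCreate/Pulled events below. The same pull can be reproduced against containerd directly; a minimal sketch using the containerd Go client, assuming the default socket path and the "k8s.io" namespace the CRI plugin uses (neither is read from this log):

    package main

    import (
        "context"
        "fmt"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Default containerd socket; the CRI plugin keeps its images in
        // the "k8s.io" namespace. Both are assumed defaults.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            panic(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.38.7",
            containerd.WithPullUnpack)
        if err != nil {
            panic(err)
        }
        fmt.Println("pulled", img.Name())
    }
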
Jan 24 00:49:06.338353 containerd[1821]: time="2026-01-24T00:49:06.338299883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:06.341079 containerd[1821]: time="2026-01-24T00:49:06.340878409Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:49:06.344938 containerd[1821]: time="2026-01-24T00:49:06.343696036Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:06.348847 containerd[1821]: time="2026-01-24T00:49:06.348086980Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:06.348847 containerd[1821]: time="2026-01-24T00:49:06.348692486Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.207583002s" Jan 24 00:49:06.348847 containerd[1821]: time="2026-01-24T00:49:06.348724586Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:49:06.352010 containerd[1821]: time="2026-01-24T00:49:06.351975019Z" level=info msg="CreateContainer within sandbox \"01c86092e390c69bf9e8864e1a45b9e404cb7abb0b066a2e0659611cba15dab7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:49:06.381857 containerd[1821]: time="2026-01-24T00:49:06.381816415Z" level=info msg="CreateContainer within sandbox \"01c86092e390c69bf9e8864e1a45b9e404cb7abb0b066a2e0659611cba15dab7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9feed20cc41fa4adaf52f0f24799f233b242510e05021f754ba9a1ac4156d69e\"" Jan 24 00:49:06.382625 containerd[1821]: time="2026-01-24T00:49:06.382406821Z" level=info msg="StartContainer for \"9feed20cc41fa4adaf52f0f24799f233b242510e05021f754ba9a1ac4156d69e\"" Jan 24 00:49:06.451459 containerd[1821]: time="2026-01-24T00:49:06.451405705Z" level=info msg="StartContainer for \"9feed20cc41fa4adaf52f0f24799f233b242510e05021f754ba9a1ac4156d69e\" returns successfully" Jan 24 00:49:07.301657 kubelet[3388]: I0124 00:49:07.301545 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hcntp" podStartSLOduration=4.301520339 podStartE2EDuration="4.301520339s" podCreationTimestamp="2026-01-24 00:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:04.363918394 +0000 UTC m=+6.163511611" watchObservedRunningTime="2026-01-24 00:49:07.301520339 +0000 UTC m=+9.101113656" Jan 24 00:49:07.397725 kubelet[3388]: I0124 00:49:07.397650 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-652tl" podStartSLOduration=2.187749668 podStartE2EDuration="4.397628393s" podCreationTimestamp="2026-01-24 00:49:03 +0000 UTC" firstStartedPulling="2026-01-24 00:49:04.139832171 +0000 UTC m=+5.939425388" 
lastFinishedPulling="2026-01-24 00:49:06.349710896 +0000 UTC m=+8.149304113" observedRunningTime="2026-01-24 00:49:07.397418991 +0000 UTC m=+9.197012308" watchObservedRunningTime="2026-01-24 00:49:07.397628393 +0000 UTC m=+9.197221610" Jan 24 00:49:12.951223 sudo[2382]: pam_unix(sudo:session): session closed for user root Jan 24 00:49:13.048936 sshd[2378]: pam_unix(sshd:session): session closed for user core Jan 24 00:49:13.058246 systemd[1]: sshd@6-10.200.4.5:22-10.200.16.10:37758.service: Deactivated successfully. Jan 24 00:49:13.074860 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:49:13.076969 systemd-logind[1792]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:49:13.080190 systemd-logind[1792]: Removed session 9. Jan 24 00:49:18.422978 kubelet[3388]: I0124 00:49:18.422915 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1e4b2c56-2a1d-478a-b199-e8d614f3273d-typha-certs\") pod \"calico-typha-5b75c7b6f7-8qcf7\" (UID: \"1e4b2c56-2a1d-478a-b199-e8d614f3273d\") " pod="calico-system/calico-typha-5b75c7b6f7-8qcf7" Jan 24 00:49:18.423645 kubelet[3388]: I0124 00:49:18.423564 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t99p5\" (UniqueName: \"kubernetes.io/projected/1e4b2c56-2a1d-478a-b199-e8d614f3273d-kube-api-access-t99p5\") pod \"calico-typha-5b75c7b6f7-8qcf7\" (UID: \"1e4b2c56-2a1d-478a-b199-e8d614f3273d\") " pod="calico-system/calico-typha-5b75c7b6f7-8qcf7" Jan 24 00:49:18.423645 kubelet[3388]: I0124 00:49:18.423609 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e4b2c56-2a1d-478a-b199-e8d614f3273d-tigera-ca-bundle\") pod \"calico-typha-5b75c7b6f7-8qcf7\" (UID: \"1e4b2c56-2a1d-478a-b199-e8d614f3273d\") " pod="calico-system/calico-typha-5b75c7b6f7-8qcf7" Jan 24 00:49:18.721671 containerd[1821]: time="2026-01-24T00:49:18.721625816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b75c7b6f7-8qcf7,Uid:1e4b2c56-2a1d-478a-b199-e8d614f3273d,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:18.727529 kubelet[3388]: I0124 00:49:18.725862 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-flexvol-driver-host\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727529 kubelet[3388]: I0124 00:49:18.725916 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3a7867c9-c7b4-4001-9709-d2731eea0fd1-node-certs\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727529 kubelet[3388]: I0124 00:49:18.725946 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-var-run-calico\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727529 kubelet[3388]: I0124 00:49:18.725968 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-xtables-lock\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727529 kubelet[3388]: I0124 00:49:18.725990 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-var-lib-calico\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727844 kubelet[3388]: I0124 00:49:18.726013 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a7867c9-c7b4-4001-9709-d2731eea0fd1-tigera-ca-bundle\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727844 kubelet[3388]: I0124 00:49:18.726036 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-policysync\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727844 kubelet[3388]: I0124 00:49:18.726056 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-lib-modules\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727844 kubelet[3388]: I0124 00:49:18.726079 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrd8g\" (UniqueName: \"kubernetes.io/projected/3a7867c9-c7b4-4001-9709-d2731eea0fd1-kube-api-access-rrd8g\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.727844 kubelet[3388]: I0124 00:49:18.726101 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-cni-bin-dir\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.728053 kubelet[3388]: I0124 00:49:18.726122 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-cni-log-dir\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.728053 kubelet[3388]: I0124 00:49:18.726146 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3a7867c9-c7b4-4001-9709-d2731eea0fd1-cni-net-dir\") pod \"calico-node-229md\" (UID: \"3a7867c9-c7b4-4001-9709-d2731eea0fd1\") " pod="calico-system/calico-node-229md" Jan 24 00:49:18.760789 kubelet[3388]: E0124 00:49:18.758988 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:18.827205 kubelet[3388]: I0124 00:49:18.827150 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52d6adb2-e5fc-4ea6-8c92-021d49b0142f-socket-dir\") pod \"csi-node-driver-w66c2\" (UID: \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\") " pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:19.824863 kubelet[3388]: I0124 00:49:18.827302 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52d6adb2-e5fc-4ea6-8c92-021d49b0142f-kubelet-dir\") pod \"csi-node-driver-w66c2\" (UID: \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\") " pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:19.824863 kubelet[3388]: I0124 00:49:18.827328 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52d6adb2-e5fc-4ea6-8c92-021d49b0142f-registration-dir\") pod \"csi-node-driver-w66c2\" (UID: \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\") " pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:19.824863 kubelet[3388]: I0124 00:49:18.827352 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7fq9\" (UniqueName: \"kubernetes.io/projected/52d6adb2-e5fc-4ea6-8c92-021d49b0142f-kube-api-access-t7fq9\") pod \"csi-node-driver-w66c2\" (UID: \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\") " pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:19.824863 kubelet[3388]: I0124 00:49:18.827418 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/52d6adb2-e5fc-4ea6-8c92-021d49b0142f-varrun\") pod \"csi-node-driver-w66c2\" (UID: \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\") " pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:19.844934 kubelet[3388]: E0124 00:49:19.844891 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:19.845153 kubelet[3388]: W0124 00:49:19.844999 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:19.845153 kubelet[3388]: E0124 00:49:19.845042 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:19.855657 kubelet[3388]: E0124 00:49:19.850733 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:19.855657 kubelet[3388]: W0124 00:49:19.850754 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:19.855657 kubelet[3388]: E0124 00:49:19.850790 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:19.858979 kubelet[3388]: E0124 00:49:19.858948 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:19.859101 kubelet[3388]: W0124 00:49:19.859086 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:19.859215 kubelet[3388]: E0124 00:49:19.859170 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:19.946873 containerd[1821]: time="2026-01-24T00:49:19.946795892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:19.947428 containerd[1821]: time="2026-01-24T00:49:19.946974193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:19.947428 containerd[1821]: time="2026-01-24T00:49:19.947385598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:19.947683 containerd[1821]: time="2026-01-24T00:49:19.947638400Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:20.006041 containerd[1821]: time="2026-01-24T00:49:20.005646710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b75c7b6f7-8qcf7,Uid:1e4b2c56-2a1d-478a-b199-e8d614f3273d,Namespace:calico-system,Attempt:0,} returns sandbox id \"541843584dfb49e67240bc50a37708be1d02535591ea42e5228e21f18c66e96c\"" Jan 24 00:49:20.007350 containerd[1821]: time="2026-01-24T00:49:20.007309628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 24 00:49:20.126791 containerd[1821]: time="2026-01-24T00:49:20.126655982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-229md,Uid:3a7867c9-c7b4-4001-9709-d2731eea0fd1,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:20.174385 containerd[1821]: time="2026-01-24T00:49:20.174087580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:20.174385 containerd[1821]: time="2026-01-24T00:49:20.174147281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:20.174385 containerd[1821]: time="2026-01-24T00:49:20.174193581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:20.175111 containerd[1821]: time="2026-01-24T00:49:20.174425084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:20.213524 containerd[1821]: time="2026-01-24T00:49:20.213385593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-229md,Uid:3a7867c9-c7b4-4001-9709-d2731eea0fd1,Namespace:calico-system,Attempt:0,} returns sandbox id \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\"" Jan 24 00:49:20.299426 kubelet[3388]: E0124 00:49:20.298847 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:21.198690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount722190087.mount: Deactivated successfully. Jan 24 00:49:22.259146 containerd[1821]: time="2026-01-24T00:49:22.259090791Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:22.261962 containerd[1821]: time="2026-01-24T00:49:22.261851020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" Jan 24 00:49:22.265327 containerd[1821]: time="2026-01-24T00:49:22.265056554Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:22.271111 containerd[1821]: time="2026-01-24T00:49:22.271076917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:22.271743 containerd[1821]: time="2026-01-24T00:49:22.271706924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.264354696s" Jan 24 00:49:22.271869 containerd[1821]: time="2026-01-24T00:49:22.271749424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" Jan 24 00:49:22.273624 containerd[1821]: time="2026-01-24T00:49:22.273174939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 24 00:49:22.286784 containerd[1821]: time="2026-01-24T00:49:22.284824962Z" level=info msg="CreateContainer within sandbox \"541843584dfb49e67240bc50a37708be1d02535591ea42e5228e21f18c66e96c\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 24 00:49:22.298677 kubelet[3388]: E0124 00:49:22.298639 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:22.327353 containerd[1821]: time="2026-01-24T00:49:22.327317208Z" level=info msg="CreateContainer within sandbox \"541843584dfb49e67240bc50a37708be1d02535591ea42e5228e21f18c66e96c\" for 
&ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5f43d613770a01c46b25ca6ea57bea45a3727f6697d30ff3b501b328ba14d3ad\"" Jan 24 00:49:22.328119 containerd[1821]: time="2026-01-24T00:49:22.327882414Z" level=info msg="StartContainer for \"5f43d613770a01c46b25ca6ea57bea45a3727f6697d30ff3b501b328ba14d3ad\"" Jan 24 00:49:22.412113 containerd[1821]: time="2026-01-24T00:49:22.412068999Z" level=info msg="StartContainer for \"5f43d613770a01c46b25ca6ea57bea45a3727f6697d30ff3b501b328ba14d3ad\" returns successfully" Jan 24 00:49:23.438844 kubelet[3388]: I0124 00:49:23.437599 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b75c7b6f7-8qcf7" podStartSLOduration=3.171419761 podStartE2EDuration="5.437578676s" podCreationTimestamp="2026-01-24 00:49:18 +0000 UTC" firstStartedPulling="2026-01-24 00:49:20.006873923 +0000 UTC m=+21.806467140" lastFinishedPulling="2026-01-24 00:49:22.273032838 +0000 UTC m=+24.072626055" observedRunningTime="2026-01-24 00:49:23.434964348 +0000 UTC m=+25.234557665" watchObservedRunningTime="2026-01-24 00:49:23.437578676 +0000 UTC m=+25.237171993" Jan 24 00:49:23.447027 kubelet[3388]: E0124 00:49:23.446780 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.447027 kubelet[3388]: W0124 00:49:23.446808 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.447027 kubelet[3388]: E0124 00:49:23.446833 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.447559 kubelet[3388]: E0124 00:49:23.447196 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.447559 kubelet[3388]: W0124 00:49:23.447210 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.447559 kubelet[3388]: E0124 00:49:23.447229 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.447869 kubelet[3388]: E0124 00:49:23.447630 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.447869 kubelet[3388]: W0124 00:49:23.447643 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.447869 kubelet[3388]: E0124 00:49:23.447659 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.449105 kubelet[3388]: E0124 00:49:23.449078 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.449105 kubelet[3388]: W0124 00:49:23.449097 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.449349 kubelet[3388]: E0124 00:49:23.449116 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.450386 kubelet[3388]: E0124 00:49:23.450079 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.450386 kubelet[3388]: W0124 00:49:23.450094 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.450386 kubelet[3388]: E0124 00:49:23.450110 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.451368 kubelet[3388]: E0124 00:49:23.450710 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.451368 kubelet[3388]: W0124 00:49:23.450725 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.451750 kubelet[3388]: E0124 00:49:23.450742 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.451999 kubelet[3388]: E0124 00:49:23.451978 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.451999 kubelet[3388]: W0124 00:49:23.451996 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.452115 kubelet[3388]: E0124 00:49:23.452012 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.452443 kubelet[3388]: E0124 00:49:23.452267 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.452443 kubelet[3388]: W0124 00:49:23.452282 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.452443 kubelet[3388]: E0124 00:49:23.452297 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.452760 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454272 kubelet[3388]: W0124 00:49:23.452794 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.452810 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.453126 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454272 kubelet[3388]: W0124 00:49:23.453136 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.453149 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.453325 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454272 kubelet[3388]: W0124 00:49:23.453333 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.453344 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.454272 kubelet[3388]: E0124 00:49:23.453520 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454699 kubelet[3388]: W0124 00:49:23.453529 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.453541 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.453733 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454699 kubelet[3388]: W0124 00:49:23.453932 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.453949 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.454292 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454699 kubelet[3388]: W0124 00:49:23.454304 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.454317 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.454699 kubelet[3388]: E0124 00:49:23.454585 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.454699 kubelet[3388]: W0124 00:49:23.454596 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.455113 kubelet[3388]: E0124 00:49:23.454611 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.463640 kubelet[3388]: E0124 00:49:23.463481 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.463640 kubelet[3388]: W0124 00:49:23.463501 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.463640 kubelet[3388]: E0124 00:49:23.463518 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.464684 kubelet[3388]: E0124 00:49:23.464484 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.464684 kubelet[3388]: W0124 00:49:23.464501 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.464684 kubelet[3388]: E0124 00:49:23.464517 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.466262 kubelet[3388]: E0124 00:49:23.466156 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.466262 kubelet[3388]: W0124 00:49:23.466176 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.466262 kubelet[3388]: E0124 00:49:23.466197 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.466782 kubelet[3388]: E0124 00:49:23.466597 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.466782 kubelet[3388]: W0124 00:49:23.466610 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.466782 kubelet[3388]: E0124 00:49:23.466626 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.467176 kubelet[3388]: E0124 00:49:23.466870 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.467176 kubelet[3388]: W0124 00:49:23.466881 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.467176 kubelet[3388]: E0124 00:49:23.466894 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.467176 kubelet[3388]: E0124 00:49:23.467090 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.467176 kubelet[3388]: W0124 00:49:23.467101 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.467176 kubelet[3388]: E0124 00:49:23.467114 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.467497 kubelet[3388]: E0124 00:49:23.467420 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.467497 kubelet[3388]: W0124 00:49:23.467432 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.467497 kubelet[3388]: E0124 00:49:23.467446 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.468753 kubelet[3388]: E0124 00:49:23.468074 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.468753 kubelet[3388]: W0124 00:49:23.468091 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.468753 kubelet[3388]: E0124 00:49:23.468109 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.468753 kubelet[3388]: E0124 00:49:23.468616 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.468753 kubelet[3388]: W0124 00:49:23.468628 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.468753 kubelet[3388]: E0124 00:49:23.468690 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.469632 kubelet[3388]: E0124 00:49:23.469246 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.469632 kubelet[3388]: W0124 00:49:23.469261 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.469632 kubelet[3388]: E0124 00:49:23.469288 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.469884 kubelet[3388]: E0124 00:49:23.469683 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.469884 kubelet[3388]: W0124 00:49:23.469695 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.469884 kubelet[3388]: E0124 00:49:23.469722 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.470016 kubelet[3388]: E0124 00:49:23.469960 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.470016 kubelet[3388]: W0124 00:49:23.469971 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.470638 kubelet[3388]: E0124 00:49:23.470168 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.470638 kubelet[3388]: E0124 00:49:23.470336 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.470638 kubelet[3388]: W0124 00:49:23.470347 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.470638 kubelet[3388]: E0124 00:49:23.470364 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.470638 kubelet[3388]: E0124 00:49:23.470572 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.470638 kubelet[3388]: W0124 00:49:23.470600 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.471200 kubelet[3388]: E0124 00:49:23.471170 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.471654 kubelet[3388]: E0124 00:49:23.471637 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.471654 kubelet[3388]: W0124 00:49:23.471650 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.471794 kubelet[3388]: E0124 00:49:23.471668 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.472592 kubelet[3388]: E0124 00:49:23.472575 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.472592 kubelet[3388]: W0124 00:49:23.472591 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.472707 kubelet[3388]: E0124 00:49:23.472629 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.473539 kubelet[3388]: E0124 00:49:23.473378 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.473539 kubelet[3388]: W0124 00:49:23.473393 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.473539 kubelet[3388]: E0124 00:49:23.473418 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:49:23.473721 kubelet[3388]: E0124 00:49:23.473681 3388 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:49:23.473721 kubelet[3388]: W0124 00:49:23.473693 3388 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:49:23.473721 kubelet[3388]: E0124 00:49:23.473707 3388 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:49:23.504619 containerd[1821]: time="2026-01-24T00:49:23.504563580Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:23.507235 containerd[1821]: time="2026-01-24T00:49:23.507076906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 24 00:49:23.513271 containerd[1821]: time="2026-01-24T00:49:23.512092059Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:23.517628 containerd[1821]: time="2026-01-24T00:49:23.516843909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:23.517628 containerd[1821]: time="2026-01-24T00:49:23.517483415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.244272576s" Jan 24 00:49:23.517628 containerd[1821]: time="2026-01-24T00:49:23.517525616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 24 00:49:23.521170 containerd[1821]: time="2026-01-24T00:49:23.521142154Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 24 00:49:23.554836 containerd[1821]: time="2026-01-24T00:49:23.554798208Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e\"" Jan 24 00:49:23.556841 containerd[1821]: time="2026-01-24T00:49:23.555335113Z" level=info msg="StartContainer for \"d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e\"" Jan 24 00:49:23.625581 containerd[1821]: time="2026-01-24T00:49:23.625536651Z" level=info msg="StartContainer for \"d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e\" returns successfully" Jan 24 00:49:23.661677 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e-rootfs.mount: Deactivated successfully. 
Jan 24 00:49:24.298218 kubelet[3388]: E0124 00:49:24.298040 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:24.423113 kubelet[3388]: I0124 00:49:24.423070 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:49:25.246174 containerd[1821]: time="2026-01-24T00:49:25.246105981Z" level=info msg="shim disconnected" id=d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e namespace=k8s.io Jan 24 00:49:25.246174 containerd[1821]: time="2026-01-24T00:49:25.246168382Z" level=warning msg="cleaning up after shim disconnected" id=d252b39709bcce1276f187eccb1ba5aaf7cc38befee0d705a31764a15df1543e namespace=k8s.io Jan 24 00:49:25.246174 containerd[1821]: time="2026-01-24T00:49:25.246179682Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:49:25.429100 containerd[1821]: time="2026-01-24T00:49:25.427939092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:49:26.298113 kubelet[3388]: E0124 00:49:26.297966 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:28.299850 kubelet[3388]: E0124 00:49:28.298487 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:28.634734 containerd[1821]: time="2026-01-24T00:49:28.634268172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:28.640862 containerd[1821]: time="2026-01-24T00:49:28.640238934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:49:28.645594 containerd[1821]: time="2026-01-24T00:49:28.645545190Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:28.652212 containerd[1821]: time="2026-01-24T00:49:28.650839545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:28.652212 containerd[1821]: time="2026-01-24T00:49:28.651748454Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.223760161s" Jan 24 00:49:28.652212 containerd[1821]: time="2026-01-24T00:49:28.651813955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:49:28.654587 containerd[1821]: time="2026-01-24T00:49:28.654549984Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:49:28.714090 containerd[1821]: time="2026-01-24T00:49:28.714040704Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c\"" Jan 24 00:49:28.714810 containerd[1821]: time="2026-01-24T00:49:28.714609610Z" level=info msg="StartContainer for \"d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c\"" Jan 24 00:49:28.780415 containerd[1821]: time="2026-01-24T00:49:28.780368195Z" level=info msg="StartContainer for \"d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c\" returns successfully" Jan 24 00:49:30.300938 kubelet[3388]: E0124 00:49:30.299943 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:30.506575 containerd[1821]: time="2026-01-24T00:49:30.506264686Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 24 00:49:30.530725 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c-rootfs.mount: Deactivated successfully. 
Jan 24 00:49:30.606495 kubelet[3388]: I0124 00:49:30.606162 3388 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 24 00:49:30.767531 kubelet[3388]: I0124 00:49:30.767472 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnxmh\" (UniqueName: \"kubernetes.io/projected/ff942fbf-7f43-4ef8-9f92-2e10cfb795ba-kube-api-access-fnxmh\") pod \"coredns-668d6bf9bc-rznsh\" (UID: \"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba\") " pod="kube-system/coredns-668d6bf9bc-rznsh" Jan 24 00:49:30.767531 kubelet[3388]: I0124 00:49:30.767532 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6da5a353-0459-4899-8898-8a79910e38eb-config\") pod \"goldmane-666569f655-7pw49\" (UID: \"6da5a353-0459-4899-8898-8a79910e38eb\") " pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:30.767981 kubelet[3388]: I0124 00:49:30.767562 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ce17dab6-c6ae-4d47-91e5-8ead47b1af74-calico-apiserver-certs\") pod \"calico-apiserver-7d87ffcbb4-hrz4c\" (UID: \"ce17dab6-c6ae-4d47-91e5-8ead47b1af74\") " pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" Jan 24 00:49:30.767981 kubelet[3388]: I0124 00:49:30.767592 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znj2f\" (UniqueName: \"kubernetes.io/projected/ce17dab6-c6ae-4d47-91e5-8ead47b1af74-kube-api-access-znj2f\") pod \"calico-apiserver-7d87ffcbb4-hrz4c\" (UID: \"ce17dab6-c6ae-4d47-91e5-8ead47b1af74\") " pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" Jan 24 00:49:30.767981 kubelet[3388]: I0124 00:49:30.767617 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjrln\" (UniqueName: \"kubernetes.io/projected/fceb2be0-173e-4232-b91f-73ea1995bd45-kube-api-access-bjrln\") pod \"whisker-7bd74fcf67-tzlpc\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " pod="calico-system/whisker-7bd74fcf67-tzlpc" Jan 24 00:49:30.767981 kubelet[3388]: I0124 00:49:30.767639 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/feb072f7-3316-4b11-9780-0976f355dc5e-calico-apiserver-certs\") pod \"calico-apiserver-7d87ffcbb4-d45k5\" (UID: \"feb072f7-3316-4b11-9780-0976f355dc5e\") " pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" Jan 24 00:49:30.767981 kubelet[3388]: I0124 00:49:30.767663 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8jrp\" (UniqueName: \"kubernetes.io/projected/feb072f7-3316-4b11-9780-0976f355dc5e-kube-api-access-z8jrp\") pod \"calico-apiserver-7d87ffcbb4-d45k5\" (UID: \"feb072f7-3316-4b11-9780-0976f355dc5e\") " pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" Jan 24 00:49:30.768130 kubelet[3388]: I0124 00:49:30.767685 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzph\" (UniqueName: \"kubernetes.io/projected/6da5a353-0459-4899-8898-8a79910e38eb-kube-api-access-fmzph\") pod \"goldmane-666569f655-7pw49\" (UID: \"6da5a353-0459-4899-8898-8a79910e38eb\") " pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:30.768130 
kubelet[3388]: I0124 00:49:30.767714 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ff942fbf-7f43-4ef8-9f92-2e10cfb795ba-config-volume\") pod \"coredns-668d6bf9bc-rznsh\" (UID: \"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba\") " pod="kube-system/coredns-668d6bf9bc-rznsh" Jan 24 00:49:30.768130 kubelet[3388]: I0124 00:49:30.767737 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-backend-key-pair\") pod \"whisker-7bd74fcf67-tzlpc\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " pod="calico-system/whisker-7bd74fcf67-tzlpc" Jan 24 00:49:30.768130 kubelet[3388]: I0124 00:49:30.767758 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-ca-bundle\") pod \"whisker-7bd74fcf67-tzlpc\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " pod="calico-system/whisker-7bd74fcf67-tzlpc" Jan 24 00:49:30.768130 kubelet[3388]: I0124 00:49:30.767809 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6da5a353-0459-4899-8898-8a79910e38eb-goldmane-ca-bundle\") pod \"goldmane-666569f655-7pw49\" (UID: \"6da5a353-0459-4899-8898-8a79910e38eb\") " pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:30.768252 kubelet[3388]: I0124 00:49:30.767840 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6da5a353-0459-4899-8898-8a79910e38eb-goldmane-key-pair\") pod \"goldmane-666569f655-7pw49\" (UID: \"6da5a353-0459-4899-8898-8a79910e38eb\") " pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:31.728260 kubelet[3388]: I0124 00:49:30.868269 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37cb1a00-f2a5-4886-98cf-7e9aeba0026f-tigera-ca-bundle\") pod \"calico-kube-controllers-c5bf95d6-rk94n\" (UID: \"37cb1a00-f2a5-4886-98cf-7e9aeba0026f\") " pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" Jan 24 00:49:31.728260 kubelet[3388]: I0124 00:49:30.868384 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7b5279fd-856f-4138-b3c1-0703370fedaa-config-volume\") pod \"coredns-668d6bf9bc-cdd95\" (UID: \"7b5279fd-856f-4138-b3c1-0703370fedaa\") " pod="kube-system/coredns-668d6bf9bc-cdd95" Jan 24 00:49:31.728260 kubelet[3388]: I0124 00:49:30.868403 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2lmk\" (UniqueName: \"kubernetes.io/projected/7b5279fd-856f-4138-b3c1-0703370fedaa-kube-api-access-h2lmk\") pod \"coredns-668d6bf9bc-cdd95\" (UID: \"7b5279fd-856f-4138-b3c1-0703370fedaa\") " pod="kube-system/coredns-668d6bf9bc-cdd95" Jan 24 00:49:31.728260 kubelet[3388]: I0124 00:49:30.868466 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n6cd\" (UniqueName: \"kubernetes.io/projected/37cb1a00-f2a5-4886-98cf-7e9aeba0026f-kube-api-access-2n6cd\") pod 
\"calico-kube-controllers-c5bf95d6-rk94n\" (UID: \"37cb1a00-f2a5-4886-98cf-7e9aeba0026f\") " pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" Jan 24 00:49:31.834203 containerd[1821]: time="2026-01-24T00:49:31.834038827Z" level=info msg="shim disconnected" id=d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c namespace=k8s.io Jan 24 00:49:31.834203 containerd[1821]: time="2026-01-24T00:49:31.834107028Z" level=warning msg="cleaning up after shim disconnected" id=d7893d605ec58717d6e71f6a7c95edc450ac41efc90f637e48e04d4348d37a4c namespace=k8s.io Jan 24 00:49:31.834203 containerd[1821]: time="2026-01-24T00:49:31.834120028Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:49:31.875861 containerd[1821]: time="2026-01-24T00:49:31.875818263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pw49,Uid:6da5a353-0459-4899-8898-8a79910e38eb,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:31.884366 containerd[1821]: time="2026-01-24T00:49:31.884329152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7bd74fcf67-tzlpc,Uid:fceb2be0-173e-4232-b91f-73ea1995bd45,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:31.885654 containerd[1821]: time="2026-01-24T00:49:31.885623865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-hrz4c,Uid:ce17dab6-c6ae-4d47-91e5-8ead47b1af74,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:49:31.888393 containerd[1821]: time="2026-01-24T00:49:31.888365494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-d45k5,Uid:feb072f7-3316-4b11-9780-0976f355dc5e,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:49:31.894112 containerd[1821]: time="2026-01-24T00:49:31.894084853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdd95,Uid:7b5279fd-856f-4138-b3c1-0703370fedaa,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:31.894674 containerd[1821]: time="2026-01-24T00:49:31.894631559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5bf95d6-rk94n,Uid:37cb1a00-f2a5-4886-98cf-7e9aeba0026f,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:32.146852 containerd[1821]: time="2026-01-24T00:49:32.145942279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rznsh,Uid:ff942fbf-7f43-4ef8-9f92-2e10cfb795ba,Namespace:kube-system,Attempt:0,}" Jan 24 00:49:32.147525 containerd[1821]: time="2026-01-24T00:49:32.147435394Z" level=error msg="Failed to destroy network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.148735 containerd[1821]: time="2026-01-24T00:49:32.148686707Z" level=error msg="encountered an error cleaning up failed sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.148924 containerd[1821]: time="2026-01-24T00:49:32.148892009Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pw49,Uid:6da5a353-0459-4899-8898-8a79910e38eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for 
sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.149353 kubelet[3388]: E0124 00:49:32.149315 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.149452 kubelet[3388]: E0124 00:49:32.149414 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:32.149575 kubelet[3388]: E0124 00:49:32.149552 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-7pw49" Jan 24 00:49:32.149963 kubelet[3388]: E0124 00:49:32.149916 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:49:32.152346 containerd[1821]: time="2026-01-24T00:49:32.152312445Z" level=error msg="Failed to destroy network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.154082 containerd[1821]: time="2026-01-24T00:49:32.154047563Z" level=error msg="encountered an error cleaning up failed sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.154269 containerd[1821]: time="2026-01-24T00:49:32.154105864Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-7bd74fcf67-tzlpc,Uid:fceb2be0-173e-4232-b91f-73ea1995bd45,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.156215 kubelet[3388]: E0124 00:49:32.154543 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.156215 kubelet[3388]: E0124 00:49:32.155730 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd74fcf67-tzlpc" Jan 24 00:49:32.156215 kubelet[3388]: E0124 00:49:32.155776 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7bd74fcf67-tzlpc" Jan 24 00:49:32.156453 kubelet[3388]: E0124 00:49:32.155862 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7bd74fcf67-tzlpc_calico-system(fceb2be0-173e-4232-b91f-73ea1995bd45)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7bd74fcf67-tzlpc_calico-system(fceb2be0-173e-4232-b91f-73ea1995bd45)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bd74fcf67-tzlpc" podUID="fceb2be0-173e-4232-b91f-73ea1995bd45" Jan 24 00:49:32.234911 containerd[1821]: time="2026-01-24T00:49:32.234860006Z" level=error msg="Failed to destroy network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.235960 containerd[1821]: time="2026-01-24T00:49:32.235917117Z" level=error msg="Failed to destroy network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.236935 containerd[1821]: time="2026-01-24T00:49:32.235706114Z" level=error msg="encountered 
an error cleaning up failed sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.238265 containerd[1821]: time="2026-01-24T00:49:32.237215730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdd95,Uid:7b5279fd-856f-4138-b3c1-0703370fedaa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.239269 containerd[1821]: time="2026-01-24T00:49:32.238924248Z" level=error msg="encountered an error cleaning up failed sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.239269 containerd[1821]: time="2026-01-24T00:49:32.238976348Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-hrz4c,Uid:ce17dab6-c6ae-4d47-91e5-8ead47b1af74,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.241812 kubelet[3388]: E0124 00:49:32.240939 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.241812 kubelet[3388]: E0124 00:49:32.241003 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cdd95" Jan 24 00:49:32.241812 kubelet[3388]: E0124 00:49:32.241032 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-cdd95" Jan 24 00:49:32.242002 kubelet[3388]: E0124 00:49:32.241081 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-cdd95_kube-system(7b5279fd-856f-4138-b3c1-0703370fedaa)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-cdd95_kube-system(7b5279fd-856f-4138-b3c1-0703370fedaa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cdd95" podUID="7b5279fd-856f-4138-b3c1-0703370fedaa" Jan 24 00:49:32.242457 kubelet[3388]: E0124 00:49:32.242275 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.242457 kubelet[3388]: E0124 00:49:32.242332 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" Jan 24 00:49:32.242457 kubelet[3388]: E0124 00:49:32.242362 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" Jan 24 00:49:32.242633 kubelet[3388]: E0124 00:49:32.242406 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:49:32.246160 containerd[1821]: time="2026-01-24T00:49:32.245864120Z" level=error msg="Failed to destroy network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.246374 containerd[1821]: time="2026-01-24T00:49:32.246342625Z" level=error msg="encountered an error cleaning up failed sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.246532 containerd[1821]: time="2026-01-24T00:49:32.246505427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-d45k5,Uid:feb072f7-3316-4b11-9780-0976f355dc5e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.247103 kubelet[3388]: E0124 00:49:32.246995 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.247103 kubelet[3388]: E0124 00:49:32.247079 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" Jan 24 00:49:32.247391 kubelet[3388]: E0124 00:49:32.247264 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" Jan 24 00:49:32.247391 kubelet[3388]: E0124 00:49:32.247346 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:49:32.258050 containerd[1821]: time="2026-01-24T00:49:32.257929946Z" level=error msg="Failed to destroy network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.259616 containerd[1821]: time="2026-01-24T00:49:32.259582463Z" level=error msg="encountered an error cleaning up failed sandbox 
\"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.259711 containerd[1821]: time="2026-01-24T00:49:32.259642764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5bf95d6-rk94n,Uid:37cb1a00-f2a5-4886-98cf-7e9aeba0026f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.259941 kubelet[3388]: E0124 00:49:32.259820 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.259941 kubelet[3388]: E0124 00:49:32.259869 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" Jan 24 00:49:32.259941 kubelet[3388]: E0124 00:49:32.259895 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" Jan 24 00:49:32.260773 kubelet[3388]: E0124 00:49:32.259939 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:49:32.287745 containerd[1821]: time="2026-01-24T00:49:32.287696956Z" level=error msg="Failed to destroy network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 
24 00:49:32.288059 containerd[1821]: time="2026-01-24T00:49:32.288027860Z" level=error msg="encountered an error cleaning up failed sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.288171 containerd[1821]: time="2026-01-24T00:49:32.288085460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rznsh,Uid:ff942fbf-7f43-4ef8-9f92-2e10cfb795ba,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.288340 kubelet[3388]: E0124 00:49:32.288301 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.288429 kubelet[3388]: E0124 00:49:32.288363 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rznsh" Jan 24 00:49:32.288429 kubelet[3388]: E0124 00:49:32.288386 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-rznsh" Jan 24 00:49:32.288523 kubelet[3388]: E0124 00:49:32.288452 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-rznsh_kube-system(ff942fbf-7f43-4ef8-9f92-2e10cfb795ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-rznsh_kube-system(ff942fbf-7f43-4ef8-9f92-2e10cfb795ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rznsh" podUID="ff942fbf-7f43-4ef8-9f92-2e10cfb795ba" Jan 24 00:49:32.301695 containerd[1821]: time="2026-01-24T00:49:32.301279298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66c2,Uid:52d6adb2-e5fc-4ea6-8c92-021d49b0142f,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:32.372669 containerd[1821]: time="2026-01-24T00:49:32.372606041Z" level=error msg="Failed to 
destroy network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.373019 containerd[1821]: time="2026-01-24T00:49:32.372986045Z" level=error msg="encountered an error cleaning up failed sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.373118 containerd[1821]: time="2026-01-24T00:49:32.373040646Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66c2,Uid:52d6adb2-e5fc-4ea6-8c92-021d49b0142f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.373387 kubelet[3388]: E0124 00:49:32.373334 3388 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.373489 kubelet[3388]: E0124 00:49:32.373416 3388 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:32.373489 kubelet[3388]: E0124 00:49:32.373443 3388 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w66c2" Jan 24 00:49:32.373573 kubelet[3388]: E0124 00:49:32.373500 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:32.444905 kubelet[3388]: I0124 00:49:32.444744 3388 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:32.447073 containerd[1821]: time="2026-01-24T00:49:32.447020117Z" level=info msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" Jan 24 00:49:32.447288 containerd[1821]: time="2026-01-24T00:49:32.447254120Z" level=info msg="Ensure that sandbox ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39 in task-service has been cleanup successfully" Jan 24 00:49:32.453457 kubelet[3388]: I0124 00:49:32.453182 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:32.453753 containerd[1821]: time="2026-01-24T00:49:32.453713487Z" level=info msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" Jan 24 00:49:32.454300 containerd[1821]: time="2026-01-24T00:49:32.454047890Z" level=info msg="Ensure that sandbox 77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9 in task-service has been cleanup successfully" Jan 24 00:49:32.455620 containerd[1821]: time="2026-01-24T00:49:32.455589706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:49:32.456534 kubelet[3388]: I0124 00:49:32.456506 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:32.458351 containerd[1821]: time="2026-01-24T00:49:32.458326335Z" level=info msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" Jan 24 00:49:32.458590 containerd[1821]: time="2026-01-24T00:49:32.458556437Z" level=info msg="Ensure that sandbox dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9 in task-service has been cleanup successfully" Jan 24 00:49:32.462209 kubelet[3388]: I0124 00:49:32.461816 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:32.462456 containerd[1821]: time="2026-01-24T00:49:32.462433378Z" level=info msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" Jan 24 00:49:32.463182 containerd[1821]: time="2026-01-24T00:49:32.463158685Z" level=info msg="Ensure that sandbox 75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021 in task-service has been cleanup successfully" Jan 24 00:49:32.467521 kubelet[3388]: I0124 00:49:32.467494 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:32.470044 containerd[1821]: time="2026-01-24T00:49:32.470018057Z" level=info msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" Jan 24 00:49:32.470336 containerd[1821]: time="2026-01-24T00:49:32.470313560Z" level=info msg="Ensure that sandbox 8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e in task-service has been cleanup successfully" Jan 24 00:49:32.474539 kubelet[3388]: I0124 00:49:32.474117 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:32.475593 containerd[1821]: time="2026-01-24T00:49:32.475571315Z" level=info msg="StopPodSandbox for 
\"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" Jan 24 00:49:32.475859 containerd[1821]: time="2026-01-24T00:49:32.475839418Z" level=info msg="Ensure that sandbox 9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83 in task-service has been cleanup successfully" Jan 24 00:49:32.495202 kubelet[3388]: I0124 00:49:32.495173 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:32.499700 containerd[1821]: time="2026-01-24T00:49:32.499667966Z" level=info msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" Jan 24 00:49:32.500268 containerd[1821]: time="2026-01-24T00:49:32.499976669Z" level=info msg="Ensure that sandbox 260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b in task-service has been cleanup successfully" Jan 24 00:49:32.507245 kubelet[3388]: I0124 00:49:32.507220 3388 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:32.508282 containerd[1821]: time="2026-01-24T00:49:32.507874652Z" level=info msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" Jan 24 00:49:32.508282 containerd[1821]: time="2026-01-24T00:49:32.508065654Z" level=info msg="Ensure that sandbox 9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06 in task-service has been cleanup successfully" Jan 24 00:49:32.604714 containerd[1821]: time="2026-01-24T00:49:32.604638060Z" level=error msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" failed" error="failed to destroy network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.606212 kubelet[3388]: E0124 00:49:32.604928 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:32.606212 kubelet[3388]: E0124 00:49:32.605003 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021"} Jan 24 00:49:32.606212 kubelet[3388]: E0124 00:49:32.605087 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37cb1a00-f2a5-4886-98cf-7e9aeba0026f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.606212 kubelet[3388]: E0124 00:49:32.605124 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37cb1a00-f2a5-4886-98cf-7e9aeba0026f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:49:32.665281 containerd[1821]: time="2026-01-24T00:49:32.665128291Z" level=error msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" failed" error="failed to destroy network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.665851 kubelet[3388]: E0124 00:49:32.665592 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:32.665851 kubelet[3388]: E0124 00:49:32.665661 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39"} Jan 24 00:49:32.665851 kubelet[3388]: E0124 00:49:32.665711 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fceb2be0-173e-4232-b91f-73ea1995bd45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.665851 kubelet[3388]: E0124 00:49:32.665741 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fceb2be0-173e-4232-b91f-73ea1995bd45\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7bd74fcf67-tzlpc" podUID="fceb2be0-173e-4232-b91f-73ea1995bd45" Jan 24 00:49:32.667391 containerd[1821]: time="2026-01-24T00:49:32.667353514Z" level=error msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" failed" error="failed to destroy network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.667702 kubelet[3388]: E0124 00:49:32.667671 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:32.667864 kubelet[3388]: E0124 00:49:32.667847 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e"} Jan 24 00:49:32.667990 kubelet[3388]: E0124 00:49:32.667973 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b5279fd-856f-4138-b3c1-0703370fedaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.668159 kubelet[3388]: E0124 00:49:32.668126 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b5279fd-856f-4138-b3c1-0703370fedaa\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-cdd95" podUID="7b5279fd-856f-4138-b3c1-0703370fedaa" Jan 24 00:49:32.669869 containerd[1821]: time="2026-01-24T00:49:32.669680638Z" level=error msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" failed" error="failed to destroy network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.670248 kubelet[3388]: E0124 00:49:32.670111 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:32.670248 kubelet[3388]: E0124 00:49:32.670149 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9"} Jan 24 00:49:32.670248 kubelet[3388]: E0124 00:49:32.670183 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"feb072f7-3316-4b11-9780-0976f355dc5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.670248 kubelet[3388]: E0124 00:49:32.670212 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"feb072f7-3316-4b11-9780-0976f355dc5e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:49:32.681243 containerd[1821]: time="2026-01-24T00:49:32.680884955Z" level=error msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" failed" error="failed to destroy network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.681345 kubelet[3388]: E0124 00:49:32.681108 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:32.681345 kubelet[3388]: E0124 00:49:32.681151 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9"} Jan 24 00:49:32.681345 kubelet[3388]: E0124 00:49:32.681183 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.681345 kubelet[3388]: E0124 00:49:32.681209 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-rznsh" podUID="ff942fbf-7f43-4ef8-9f92-2e10cfb795ba" Jan 24 00:49:32.696471 containerd[1821]: time="2026-01-24T00:49:32.695750510Z" level=error msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" failed" error="failed to destroy network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.696576 kubelet[3388]: E0124 00:49:32.696142 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:32.696576 kubelet[3388]: E0124 00:49:32.696190 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06"} Jan 24 00:49:32.696576 kubelet[3388]: E0124 00:49:32.696231 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6da5a353-0459-4899-8898-8a79910e38eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.696576 kubelet[3388]: E0124 00:49:32.696260 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6da5a353-0459-4899-8898-8a79910e38eb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:49:32.697239 containerd[1821]: time="2026-01-24T00:49:32.697131424Z" level=error msg="StopPodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" failed" error="failed to destroy network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.697587 kubelet[3388]: E0124 00:49:32.697459 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:32.697587 kubelet[3388]: E0124 00:49:32.697497 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83"} Jan 24 00:49:32.697587 kubelet[3388]: E0124 00:49:32.697541 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce17dab6-c6ae-4d47-91e5-8ead47b1af74\" with KillPodSandboxError: \"rpc error: code = Unknown 
desc = failed to destroy network for sandbox \\\"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.697833 kubelet[3388]: E0124 00:49:32.697569 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce17dab6-c6ae-4d47-91e5-8ead47b1af74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:49:32.698307 containerd[1821]: time="2026-01-24T00:49:32.698257936Z" level=error msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" failed" error="failed to destroy network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:49:32.698671 kubelet[3388]: E0124 00:49:32.698543 3388 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:32.698671 kubelet[3388]: E0124 00:49:32.698582 3388 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b"} Jan 24 00:49:32.698671 kubelet[3388]: E0124 00:49:32.698613 3388 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:49:32.698671 kubelet[3388]: E0124 00:49:32.698638 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52d6adb2-e5fc-4ea6-8c92-021d49b0142f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:38.543078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985934359.mount: Deactivated successfully. 
Jan 24 00:49:38.597048 containerd[1821]: time="2026-01-24T00:49:38.596989669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:38.601349 containerd[1821]: time="2026-01-24T00:49:38.601181212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:49:38.607874 containerd[1821]: time="2026-01-24T00:49:38.606635569Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:38.615784 containerd[1821]: time="2026-01-24T00:49:38.615401760Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.159769453s" Jan 24 00:49:38.615784 containerd[1821]: time="2026-01-24T00:49:38.615449160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:49:38.615784 containerd[1821]: time="2026-01-24T00:49:38.615565061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:49:38.634812 containerd[1821]: time="2026-01-24T00:49:38.633908752Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:49:38.675866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount104075861.mount: Deactivated successfully. Jan 24 00:49:38.689932 containerd[1821]: time="2026-01-24T00:49:38.689301526Z" level=info msg="CreateContainer within sandbox \"fdc8552aacbce4d9304a5b4f42bd7cbf06cb952447400adf0a609404361a7641\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f8d0e2c7c1c055a1a83c2d9d60485a1223dfec32d568d840eb33028bc891f83f\"" Jan 24 00:49:38.690386 containerd[1821]: time="2026-01-24T00:49:38.690357037Z" level=info msg="StartContainer for \"f8d0e2c7c1c055a1a83c2d9d60485a1223dfec32d568d840eb33028bc891f83f\"" Jan 24 00:49:38.760347 containerd[1821]: time="2026-01-24T00:49:38.760286662Z" level=info msg="StartContainer for \"f8d0e2c7c1c055a1a83c2d9d60485a1223dfec32d568d840eb33028bc891f83f\" returns successfully" Jan 24 00:49:38.868084 kubelet[3388]: I0124 00:49:38.867946 3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:49:39.087958 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:49:39.088104 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 24 00:49:39.202095 containerd[1821]: time="2026-01-24T00:49:39.201399735Z" level=info msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.293 [INFO][4536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.295 [INFO][4536] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" iface="eth0" netns="/var/run/netns/cni-13a5f169-c8a6-03f7-d36b-81be751eb2e6" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.295 [INFO][4536] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" iface="eth0" netns="/var/run/netns/cni-13a5f169-c8a6-03f7-d36b-81be751eb2e6" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.296 [INFO][4536] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" iface="eth0" netns="/var/run/netns/cni-13a5f169-c8a6-03f7-d36b-81be751eb2e6" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.296 [INFO][4536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.296 [INFO][4536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.335 [INFO][4543] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.335 [INFO][4543] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.335 [INFO][4543] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.342 [WARNING][4543] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.342 [INFO][4543] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.343 [INFO][4543] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:39.349786 containerd[1821]: 2026-01-24 00:49:39.347 [INFO][4536] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:39.351218 containerd[1821]: time="2026-01-24T00:49:39.349951775Z" level=info msg="TearDown network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" successfully" Jan 24 00:49:39.351218 containerd[1821]: time="2026-01-24T00:49:39.349984675Z" level=info msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" returns successfully" Jan 24 00:49:39.432515 kubelet[3388]: I0124 00:49:39.432473 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bjrln\" (UniqueName: \"kubernetes.io/projected/fceb2be0-173e-4232-b91f-73ea1995bd45-kube-api-access-bjrln\") pod \"fceb2be0-173e-4232-b91f-73ea1995bd45\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " Jan 24 00:49:39.433585 kubelet[3388]: I0124 00:49:39.432531 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-backend-key-pair\") pod \"fceb2be0-173e-4232-b91f-73ea1995bd45\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " Jan 24 00:49:39.434785 kubelet[3388]: I0124 00:49:39.433686 3388 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-ca-bundle\") pod \"fceb2be0-173e-4232-b91f-73ea1995bd45\" (UID: \"fceb2be0-173e-4232-b91f-73ea1995bd45\") " Jan 24 00:49:39.435086 kubelet[3388]: I0124 00:49:39.435026 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fceb2be0-173e-4232-b91f-73ea1995bd45" (UID: "fceb2be0-173e-4232-b91f-73ea1995bd45"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:49:39.439397 kubelet[3388]: I0124 00:49:39.439224 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fceb2be0-173e-4232-b91f-73ea1995bd45" (UID: "fceb2be0-173e-4232-b91f-73ea1995bd45"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:49:39.439397 kubelet[3388]: I0124 00:49:39.439362 3388 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fceb2be0-173e-4232-b91f-73ea1995bd45-kube-api-access-bjrln" (OuterVolumeSpecName: "kube-api-access-bjrln") pod "fceb2be0-173e-4232-b91f-73ea1995bd45" (UID: "fceb2be0-173e-4232-b91f-73ea1995bd45"). InnerVolumeSpecName "kube-api-access-bjrln". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:49:39.534420 kubelet[3388]: I0124 00:49:39.534347 3388 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-ca-bundle\") on node \"ci-4081.3.6-n-d923855e69\" DevicePath \"\"" Jan 24 00:49:39.534420 kubelet[3388]: I0124 00:49:39.534379 3388 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bjrln\" (UniqueName: \"kubernetes.io/projected/fceb2be0-173e-4232-b91f-73ea1995bd45-kube-api-access-bjrln\") on node \"ci-4081.3.6-n-d923855e69\" DevicePath \"\"" Jan 24 00:49:39.534420 kubelet[3388]: I0124 00:49:39.534394 3388 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fceb2be0-173e-4232-b91f-73ea1995bd45-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-d923855e69\" DevicePath \"\"" Jan 24 00:49:39.544597 systemd[1]: run-netns-cni\x2d13a5f169\x2dc8a6\x2d03f7\x2dd36b\x2d81be751eb2e6.mount: Deactivated successfully. Jan 24 00:49:39.547886 systemd[1]: var-lib-kubelet-pods-fceb2be0\x2d173e\x2d4232\x2db91f\x2d73ea1995bd45-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbjrln.mount: Deactivated successfully. Jan 24 00:49:39.548059 systemd[1]: var-lib-kubelet-pods-fceb2be0\x2d173e\x2d4232\x2db91f\x2d73ea1995bd45-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 24 00:49:39.579809 kubelet[3388]: I0124 00:49:39.578575 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-229md" podStartSLOduration=3.17664768 podStartE2EDuration="21.578551145s" podCreationTimestamp="2026-01-24 00:49:18 +0000 UTC" firstStartedPulling="2026-01-24 00:49:20.214758408 +0000 UTC m=+22.014351625" lastFinishedPulling="2026-01-24 00:49:38.616661873 +0000 UTC m=+40.416255090" observedRunningTime="2026-01-24 00:49:39.578354743 +0000 UTC m=+41.377948060" watchObservedRunningTime="2026-01-24 00:49:39.578551145 +0000 UTC m=+41.378144462" Jan 24 00:49:39.635156 kubelet[3388]: I0124 00:49:39.635113 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cdf55fad-ab11-450d-ad4f-7c531f40d0f4-whisker-backend-key-pair\") pod \"whisker-55c747bbf5-8vn48\" (UID: \"cdf55fad-ab11-450d-ad4f-7c531f40d0f4\") " pod="calico-system/whisker-55c747bbf5-8vn48" Jan 24 00:49:39.638006 kubelet[3388]: I0124 00:49:39.637976 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cdf55fad-ab11-450d-ad4f-7c531f40d0f4-whisker-ca-bundle\") pod \"whisker-55c747bbf5-8vn48\" (UID: \"cdf55fad-ab11-450d-ad4f-7c531f40d0f4\") " pod="calico-system/whisker-55c747bbf5-8vn48" Jan 24 00:49:39.640883 kubelet[3388]: I0124 00:49:39.638141 3388 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4px27\" (UniqueName: \"kubernetes.io/projected/cdf55fad-ab11-450d-ad4f-7c531f40d0f4-kube-api-access-4px27\") pod \"whisker-55c747bbf5-8vn48\" (UID: \"cdf55fad-ab11-450d-ad4f-7c531f40d0f4\") " pod="calico-system/whisker-55c747bbf5-8vn48" Jan 24 00:49:39.916055 containerd[1821]: time="2026-01-24T00:49:39.915935542Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-55c747bbf5-8vn48,Uid:cdf55fad-ab11-450d-ad4f-7c531f40d0f4,Namespace:calico-system,Attempt:0,}" Jan 24 00:49:40.084831 systemd-networkd[1395]: calib72b38e19ea: Link UP Jan 24 00:49:40.086457 systemd-networkd[1395]: calib72b38e19ea: Gained carrier Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:39.987 [INFO][4564] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:39.996 [INFO][4564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0 whisker-55c747bbf5- calico-system cdf55fad-ab11-450d-ad4f-7c531f40d0f4 919 0 2026-01-24 00:49:39 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:55c747bbf5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 whisker-55c747bbf5-8vn48 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib72b38e19ea [] [] }} ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:39.996 [INFO][4564] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.021 [INFO][4577] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" HandleID="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.022 [INFO][4577] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" HandleID="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"whisker-55c747bbf5-8vn48", "timestamp":"2026-01-24 00:49:40.02185864 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.022 [INFO][4577] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.022 [INFO][4577] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.022 [INFO][4577] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.027 [INFO][4577] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.032 [INFO][4577] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.037 [INFO][4577] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.038 [INFO][4577] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.040 [INFO][4577] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.040 [INFO][4577] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.041 [INFO][4577] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8 Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.047 [INFO][4577] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.057 [INFO][4577] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.65/26] block=192.168.88.64/26 handle="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.057 [INFO][4577] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.65/26] handle="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.057 [INFO][4577] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:40.107137 containerd[1821]: 2026-01-24 00:49:40.058 [INFO][4577] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.65/26] IPv6=[] ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" HandleID="k8s-pod-network.b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.059 [INFO][4564] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0", GenerateName:"whisker-55c747bbf5-", Namespace:"calico-system", SelfLink:"", UID:"cdf55fad-ab11-450d-ad4f-7c531f40d0f4", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c747bbf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"whisker-55c747bbf5-8vn48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib72b38e19ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.059 [INFO][4564] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.65/32] ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.059 [INFO][4564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib72b38e19ea ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.086 [INFO][4564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.087 [INFO][4564] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" 
Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0", GenerateName:"whisker-55c747bbf5-", Namespace:"calico-system", SelfLink:"", UID:"cdf55fad-ab11-450d-ad4f-7c531f40d0f4", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"55c747bbf5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8", Pod:"whisker-55c747bbf5-8vn48", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib72b38e19ea", MAC:"4e:82:ea:1e:5f:69", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:40.109020 containerd[1821]: 2026-01-24 00:49:40.105 [INFO][4564] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8" Namespace="calico-system" Pod="whisker-55c747bbf5-8vn48" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--55c747bbf5--8vn48-eth0" Jan 24 00:49:40.129693 containerd[1821]: time="2026-01-24T00:49:40.129277354Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:40.129693 containerd[1821]: time="2026-01-24T00:49:40.129337455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:40.129693 containerd[1821]: time="2026-01-24T00:49:40.129353855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:40.129693 containerd[1821]: time="2026-01-24T00:49:40.129448956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:40.185638 containerd[1821]: time="2026-01-24T00:49:40.185474837Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55c747bbf5-8vn48,Uid:cdf55fad-ab11-450d-ad4f-7c531f40d0f4,Namespace:calico-system,Attempt:0,} returns sandbox id \"b5d9d50f3b6144a440697df5636dde230783210dcd4b38915104c07fc7f005a8\"" Jan 24 00:49:40.188128 containerd[1821]: time="2026-01-24T00:49:40.188007563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:49:40.299924 kubelet[3388]: I0124 00:49:40.299882 3388 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fceb2be0-173e-4232-b91f-73ea1995bd45" path="/var/lib/kubelet/pods/fceb2be0-173e-4232-b91f-73ea1995bd45/volumes" Jan 24 00:49:40.449630 containerd[1821]: time="2026-01-24T00:49:40.449466573Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:40.453245 containerd[1821]: time="2026-01-24T00:49:40.453139411Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:49:40.453245 containerd[1821]: time="2026-01-24T00:49:40.453187612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:49:40.453592 kubelet[3388]: E0124 00:49:40.453547 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:40.453687 kubelet[3388]: E0124 00:49:40.453612 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:40.454106 kubelet[3388]: E0124 00:49:40.453829 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4d61cbab0549149b9649b46b6d3269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:40.456077 containerd[1821]: time="2026-01-24T00:49:40.456046442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:49:40.728600 containerd[1821]: time="2026-01-24T00:49:40.728413465Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:40.731710 containerd[1821]: time="2026-01-24T00:49:40.731553898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:49:40.731710 containerd[1821]: time="2026-01-24T00:49:40.731654899Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:40.732962 kubelet[3388]: E0124 00:49:40.732106 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:40.732962 kubelet[3388]: E0124 00:49:40.732176 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:40.733138 kubelet[3388]: E0124 00:49:40.732335 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:40.733682 kubelet[3388]: E0124 00:49:40.733602 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:49:40.892969 kernel: bpftool[4753]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:49:41.167074 systemd-networkd[1395]: vxlan.calico: Link UP Jan 24 00:49:41.167084 systemd-networkd[1395]: vxlan.calico: Gained carrier 
Jan 24 00:49:41.535929 kubelet[3388]: E0124 00:49:41.535544 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:49:41.986054 systemd-networkd[1395]: calib72b38e19ea: Gained IPv6LL Jan 24 00:49:43.074328 systemd-networkd[1395]: vxlan.calico: Gained IPv6LL Jan 24 00:49:44.300576 containerd[1821]: time="2026-01-24T00:49:44.300436904Z" level=info msg="StopPodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.348 [INFO][4840] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.348 [INFO][4840] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" iface="eth0" netns="/var/run/netns/cni-6efca7c2-076c-f8fc-1e5a-6d5007ee5501" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.349 [INFO][4840] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" iface="eth0" netns="/var/run/netns/cni-6efca7c2-076c-f8fc-1e5a-6d5007ee5501" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.350 [INFO][4840] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" iface="eth0" netns="/var/run/netns/cni-6efca7c2-076c-f8fc-1e5a-6d5007ee5501" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.350 [INFO][4840] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.350 [INFO][4840] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.373 [INFO][4847] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.374 [INFO][4847] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.374 [INFO][4847] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.379 [WARNING][4847] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.379 [INFO][4847] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.380 [INFO][4847] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:44.382855 containerd[1821]: 2026-01-24 00:49:44.381 [INFO][4840] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:44.385881 containerd[1821]: time="2026-01-24T00:49:44.382996137Z" level=info msg="TearDown network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" successfully" Jan 24 00:49:44.385881 containerd[1821]: time="2026-01-24T00:49:44.383027137Z" level=info msg="StopPodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" returns successfully" Jan 24 00:49:44.385881 containerd[1821]: time="2026-01-24T00:49:44.385383861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-hrz4c,Uid:ce17dab6-c6ae-4d47-91e5-8ead47b1af74,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:49:44.388736 systemd[1]: run-netns-cni\x2d6efca7c2\x2d076c\x2df8fc\x2d1e5a\x2d6d5007ee5501.mount: Deactivated successfully. 
Jan 24 00:49:44.602228 systemd-networkd[1395]: cali258bfe9ee26: Link UP Jan 24 00:49:44.602705 systemd-networkd[1395]: cali258bfe9ee26: Gained carrier Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.533 [INFO][4858] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0 calico-apiserver-7d87ffcbb4- calico-apiserver ce17dab6-c6ae-4d47-91e5-8ead47b1af74 946 0 2026-01-24 00:49:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d87ffcbb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 calico-apiserver-7d87ffcbb4-hrz4c eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali258bfe9ee26 [] [] }} ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.534 [INFO][4858] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.563 [INFO][4867] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" HandleID="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.563 [INFO][4867] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" HandleID="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cf090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-d923855e69", "pod":"calico-apiserver-7d87ffcbb4-hrz4c", "timestamp":"2026-01-24 00:49:44.563040655 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.563 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.563 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.563 [INFO][4867] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.569 [INFO][4867] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.573 [INFO][4867] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.577 [INFO][4867] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.578 [INFO][4867] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.580 [INFO][4867] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.580 [INFO][4867] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.581 [INFO][4867] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.590 [INFO][4867] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.595 [INFO][4867] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.66/26] block=192.168.88.64/26 handle="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.595 [INFO][4867] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.66/26] handle="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.595 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:44.623862 containerd[1821]: 2026-01-24 00:49:44.596 [INFO][4867] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.66/26] IPv6=[] ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" HandleID="k8s-pod-network.ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.597 [INFO][4858] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce17dab6-c6ae-4d47-91e5-8ead47b1af74", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"calico-apiserver-7d87ffcbb4-hrz4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali258bfe9ee26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.598 [INFO][4858] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.66/32] ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.598 [INFO][4858] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali258bfe9ee26 ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.604 [INFO][4858] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.604 
[INFO][4858] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce17dab6-c6ae-4d47-91e5-8ead47b1af74", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a", Pod:"calico-apiserver-7d87ffcbb4-hrz4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali258bfe9ee26", MAC:"46:44:71:27:8a:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:44.624503 containerd[1821]: 2026-01-24 00:49:44.619 [INFO][4858] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-hrz4c" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:44.652738 containerd[1821]: time="2026-01-24T00:49:44.652644359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:44.653630 containerd[1821]: time="2026-01-24T00:49:44.652788761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:44.653630 containerd[1821]: time="2026-01-24T00:49:44.652807261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.653630 containerd[1821]: time="2026-01-24T00:49:44.652894462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:44.749979 containerd[1821]: time="2026-01-24T00:49:44.749907541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-hrz4c,Uid:ce17dab6-c6ae-4d47-91e5-8ead47b1af74,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a\"" Jan 24 00:49:44.752719 containerd[1821]: time="2026-01-24T00:49:44.752676369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:49:45.023798 containerd[1821]: time="2026-01-24T00:49:45.023433302Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:45.026817 containerd[1821]: time="2026-01-24T00:49:45.026756236Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:49:45.026817 containerd[1821]: time="2026-01-24T00:49:45.026836237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:45.027199 kubelet[3388]: E0124 00:49:45.027154 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:45.027792 kubelet[3388]: E0124 00:49:45.027214 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:45.027792 kubelet[3388]: E0124 00:49:45.027387 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znj2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:45.029373 kubelet[3388]: E0124 00:49:45.029322 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:49:45.299211 containerd[1821]: time="2026-01-24T00:49:45.298339478Z" level=info msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" Jan 24 00:49:45.299211 containerd[1821]: time="2026-01-24T00:49:45.298909883Z" level=info msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.373 [INFO][4942] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.373 [INFO][4942] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" iface="eth0" netns="/var/run/netns/cni-6ee2116d-b57c-ed71-30fa-557c2b1e3d95" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.373 [INFO][4942] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" iface="eth0" netns="/var/run/netns/cni-6ee2116d-b57c-ed71-30fa-557c2b1e3d95" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.374 [INFO][4942] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" iface="eth0" netns="/var/run/netns/cni-6ee2116d-b57c-ed71-30fa-557c2b1e3d95" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.375 [INFO][4942] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.375 [INFO][4942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.422 [INFO][4956] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.422 [INFO][4956] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.422 [INFO][4956] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.432 [WARNING][4956] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.432 [INFO][4956] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.437 [INFO][4956] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:45.442640 containerd[1821]: 2026-01-24 00:49:45.440 [INFO][4942] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:45.442640 containerd[1821]: time="2026-01-24T00:49:45.442424932Z" level=info msg="TearDown network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" successfully" Jan 24 00:49:45.445471 containerd[1821]: time="2026-01-24T00:49:45.443809746Z" level=info msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" returns successfully" Jan 24 00:49:45.448021 systemd[1]: run-netns-cni\x2d6ee2116d\x2db57c\x2ded71\x2d30fa\x2d557c2b1e3d95.mount: Deactivated successfully. Jan 24 00:49:45.451257 containerd[1821]: time="2026-01-24T00:49:45.450483114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdd95,Uid:7b5279fd-856f-4138-b3c1-0703370fedaa,Namespace:kube-system,Attempt:1,}" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.399 [INFO][4943] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.400 [INFO][4943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" iface="eth0" netns="/var/run/netns/cni-a78769ff-ae74-dc4c-b6d4-cbbe7b61a100" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.400 [INFO][4943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" iface="eth0" netns="/var/run/netns/cni-a78769ff-ae74-dc4c-b6d4-cbbe7b61a100" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.400 [INFO][4943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" iface="eth0" netns="/var/run/netns/cni-a78769ff-ae74-dc4c-b6d4-cbbe7b61a100" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.401 [INFO][4943] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.401 [INFO][4943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.436 [INFO][4963] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.436 [INFO][4963] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.437 [INFO][4963] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.450 [WARNING][4963] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.451 [INFO][4963] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.453 [INFO][4963] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:45.456124 containerd[1821]: 2026-01-24 00:49:45.454 [INFO][4943] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:49:45.457156 containerd[1821]: time="2026-01-24T00:49:45.456390573Z" level=info msg="TearDown network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" successfully" Jan 24 00:49:45.457156 containerd[1821]: time="2026-01-24T00:49:45.456418974Z" level=info msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" returns successfully" Jan 24 00:49:45.460108 containerd[1821]: time="2026-01-24T00:49:45.460071310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rznsh,Uid:ff942fbf-7f43-4ef8-9f92-2e10cfb795ba,Namespace:kube-system,Attempt:1,}" Jan 24 00:49:45.461100 systemd[1]: run-netns-cni\x2da78769ff\x2dae74\x2ddc4c\x2db6d4\x2dcbbe7b61a100.mount: Deactivated successfully. Jan 24 00:49:45.551342 kubelet[3388]: E0124 00:49:45.551029 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:49:45.661411 systemd-networkd[1395]: cali2ec281444a0: Link UP Jan 24 00:49:45.665808 systemd-networkd[1395]: cali2ec281444a0: Gained carrier Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.548 [INFO][4971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0 coredns-668d6bf9bc- kube-system 7b5279fd-856f-4138-b3c1-0703370fedaa 955 0 2026-01-24 00:49:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 coredns-668d6bf9bc-cdd95 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2ec281444a0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.550 [INFO][4971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.612 [INFO][4994] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" HandleID="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.613 [INFO][4994] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" 
HandleID="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fe70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"coredns-668d6bf9bc-cdd95", "timestamp":"2026-01-24 00:49:45.612449149 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.614 [INFO][4994] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.614 [INFO][4994] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.614 [INFO][4994] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.623 [INFO][4994] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.629 [INFO][4994] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.633 [INFO][4994] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.635 [INFO][4994] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.636 [INFO][4994] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.636 [INFO][4994] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.637 [INFO][4994] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.643 [INFO][4994] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.651 [INFO][4994] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.67/26] block=192.168.88.64/26 handle="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.651 [INFO][4994] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.67/26] handle="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.651 [INFO][4994] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:45.688826 containerd[1821]: 2026-01-24 00:49:45.651 [INFO][4994] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.67/26] IPv6=[] ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" HandleID="k8s-pod-network.e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.654 [INFO][4971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b5279fd-856f-4138-b3c1-0703370fedaa", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"coredns-668d6bf9bc-cdd95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec281444a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.654 [INFO][4971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.67/32] ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.654 [INFO][4971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2ec281444a0 ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.667 [INFO][4971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.667 [INFO][4971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b5279fd-856f-4138-b3c1-0703370fedaa", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f", Pod:"coredns-668d6bf9bc-cdd95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec281444a0", MAC:"06:b4:c0:43:db:aa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.691040 containerd[1821]: 2026-01-24 00:49:45.685 [INFO][4971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f" Namespace="kube-system" Pod="coredns-668d6bf9bc-cdd95" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:45.714382 containerd[1821]: time="2026-01-24T00:49:45.713735071Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:45.714619 containerd[1821]: time="2026-01-24T00:49:45.714383278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:45.714619 containerd[1821]: time="2026-01-24T00:49:45.714404878Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.714619 containerd[1821]: time="2026-01-24T00:49:45.714494679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.772687 systemd-networkd[1395]: cali18bb639d94f: Link UP Jan 24 00:49:45.790409 systemd-networkd[1395]: cali18bb639d94f: Gained carrier Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.586 [INFO][4976] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0 coredns-668d6bf9bc- kube-system ff942fbf-7f43-4ef8-9f92-2e10cfb795ba 956 0 2026-01-24 00:49:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 coredns-668d6bf9bc-rznsh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali18bb639d94f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.586 [INFO][4976] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.631 [INFO][5000] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" HandleID="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.631 [INFO][5000] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" HandleID="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5090), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"coredns-668d6bf9bc-rznsh", "timestamp":"2026-01-24 00:49:45.631527341 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.631 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.652 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.652 [INFO][5000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.724 [INFO][5000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.730 [INFO][5000] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.738 [INFO][5000] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.740 [INFO][5000] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.742 [INFO][5000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.742 [INFO][5000] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.743 [INFO][5000] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616 Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.752 [INFO][5000] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.763 [INFO][5000] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.68/26] block=192.168.88.64/26 handle="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.763 [INFO][5000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.68/26] handle="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.763 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
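[editor's note] A small reading aid for the WorkloadEndpoint dumps in this stretch: Go's struct formatting prints the port numbers in hex, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the coredns metrics port). A few lines of Go, purely illustrative, confirm the conversion:

package main

import "fmt"

func main() {
	// Hex ports as printed in the endpoint dumps -> decimal.
	for _, p := range []struct {
		name string
		port uint16
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	} {
		fmt.Printf("%-14s %d\n", p.name, p.port) // 53, 53, 9153
	}
}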
Jan 24 00:49:45.838808 containerd[1821]: 2026-01-24 00:49:45.763 [INFO][5000] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.68/26] IPv6=[] ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" HandleID="k8s-pod-network.887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.766 [INFO][4976] cni-plugin/k8s.go 418: Populated endpoint ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"coredns-668d6bf9bc-rznsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18bb639d94f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.766 [INFO][4976] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.68/32] ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.767 [INFO][4976] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18bb639d94f ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.772 [INFO][4976] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.776 [INFO][4976] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616", Pod:"coredns-668d6bf9bc-rznsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18bb639d94f", MAC:"62:58:20:70:dd:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:45.842586 containerd[1821]: 2026-01-24 00:49:45.804 [INFO][4976] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616" Namespace="kube-system" Pod="coredns-668d6bf9bc-rznsh" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:49:45.862939 containerd[1821]: time="2026-01-24T00:49:45.861961968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-cdd95,Uid:7b5279fd-856f-4138-b3c1-0703370fedaa,Namespace:kube-system,Attempt:1,} returns sandbox id \"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f\"" Jan 24 00:49:45.872255 containerd[1821]: time="2026-01-24T00:49:45.871985969Z" level=info msg="CreateContainer within sandbox \"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:49:45.896899 containerd[1821]: time="2026-01-24T00:49:45.895611907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:45.897244 containerd[1821]: time="2026-01-24T00:49:45.896871020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:45.897244 containerd[1821]: time="2026-01-24T00:49:45.897092922Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.897796 containerd[1821]: time="2026-01-24T00:49:45.897545727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:45.914975 containerd[1821]: time="2026-01-24T00:49:45.914854802Z" level=info msg="CreateContainer within sandbox \"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"465ea47130ddf4b4eda749e82c28f0072a6d045e7e560808f9126d6ee9fe57d1\"" Jan 24 00:49:45.915807 containerd[1821]: time="2026-01-24T00:49:45.915622609Z" level=info msg="StartContainer for \"465ea47130ddf4b4eda749e82c28f0072a6d045e7e560808f9126d6ee9fe57d1\"" Jan 24 00:49:45.987848 containerd[1821]: time="2026-01-24T00:49:45.987783538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-rznsh,Uid:ff942fbf-7f43-4ef8-9f92-2e10cfb795ba,Namespace:kube-system,Attempt:1,} returns sandbox id \"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616\"" Jan 24 00:49:45.996294 containerd[1821]: time="2026-01-24T00:49:45.996110722Z" level=info msg="CreateContainer within sandbox \"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:49:46.012538 containerd[1821]: time="2026-01-24T00:49:46.011109673Z" level=info msg="StartContainer for \"465ea47130ddf4b4eda749e82c28f0072a6d045e7e560808f9126d6ee9fe57d1\" returns successfully" Jan 24 00:49:46.033536 containerd[1821]: time="2026-01-24T00:49:46.033474499Z" level=info msg="CreateContainer within sandbox \"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b8d35a3cf418e57ccb2e3bbe7cb5b5444e7f12f0d10f79cbc1a106ebb479a916\"" Jan 24 00:49:46.037028 containerd[1821]: time="2026-01-24T00:49:46.036990435Z" level=info msg="StartContainer for \"b8d35a3cf418e57ccb2e3bbe7cb5b5444e7f12f0d10f79cbc1a106ebb479a916\"" Jan 24 00:49:46.123295 containerd[1821]: time="2026-01-24T00:49:46.122484798Z" level=info msg="StartContainer for \"b8d35a3cf418e57ccb2e3bbe7cb5b5444e7f12f0d10f79cbc1a106ebb479a916\" returns successfully" Jan 24 00:49:46.300280 containerd[1821]: time="2026-01-24T00:49:46.300224792Z" level=info msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.351 [INFO][5196] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.352 [INFO][5196] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" iface="eth0" netns="/var/run/netns/cni-555826f0-d04a-1ff7-6f1f-dd9f835c639f" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.353 [INFO][5196] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" iface="eth0" netns="/var/run/netns/cni-555826f0-d04a-1ff7-6f1f-dd9f835c639f" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.353 [INFO][5196] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" iface="eth0" netns="/var/run/netns/cni-555826f0-d04a-1ff7-6f1f-dd9f835c639f" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.353 [INFO][5196] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.353 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.376 [INFO][5203] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.376 [INFO][5203] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.376 [INFO][5203] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.382 [WARNING][5203] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.382 [INFO][5203] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.384 [INFO][5203] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:46.394928 containerd[1821]: 2026-01-24 00:49:46.389 [INFO][5196] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:46.395495 containerd[1821]: time="2026-01-24T00:49:46.395042549Z" level=info msg="TearDown network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" successfully" Jan 24 00:49:46.395495 containerd[1821]: time="2026-01-24T00:49:46.395086950Z" level=info msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" returns successfully" Jan 24 00:49:46.397795 containerd[1821]: time="2026-01-24T00:49:46.395892158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-d45k5,Uid:feb072f7-3316-4b11-9780-0976f355dc5e,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:49:46.405403 systemd[1]: run-netns-cni\x2d555826f0\x2dd04a\x2d1ff7\x2d6f1f\x2ddd9f835c639f.mount: Deactivated successfully. 
Jan 24 00:49:46.578671 kubelet[3388]: E0124 00:49:46.578589 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:49:46.600641 systemd-networkd[1395]: cali258bfe9ee26: Gained IPv6LL Jan 24 00:49:46.605948 systemd-networkd[1395]: calia973dd7dbb0: Link UP Jan 24 00:49:46.606225 systemd-networkd[1395]: calia973dd7dbb0: Gained carrier Jan 24 00:49:46.631589 kubelet[3388]: I0124 00:49:46.631425 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-cdd95" podStartSLOduration=43.631400535 podStartE2EDuration="43.631400535s" podCreationTimestamp="2026-01-24 00:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:46.610500324 +0000 UTC m=+48.410093641" watchObservedRunningTime="2026-01-24 00:49:46.631400535 +0000 UTC m=+48.430993852" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.485 [INFO][5210] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0 calico-apiserver-7d87ffcbb4- calico-apiserver feb072f7-3316-4b11-9780-0976f355dc5e 984 0 2026-01-24 00:49:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d87ffcbb4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 calico-apiserver-7d87ffcbb4-d45k5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia973dd7dbb0 [] [] }} ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.485 [INFO][5210] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.519 [INFO][5222] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" HandleID="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.521 [INFO][5222] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" HandleID="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" 
Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f200), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-d923855e69", "pod":"calico-apiserver-7d87ffcbb4-d45k5", "timestamp":"2026-01-24 00:49:46.519572206 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.521 [INFO][5222] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.521 [INFO][5222] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.521 [INFO][5222] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.530 [INFO][5222] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.537 [INFO][5222] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.543 [INFO][5222] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.545 [INFO][5222] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.550 [INFO][5222] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.551 [INFO][5222] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.554 [INFO][5222] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7 Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.563 [INFO][5222] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.583 [INFO][5222] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.69/26] block=192.168.88.64/26 handle="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.583 [INFO][5222] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.69/26] handle="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.583 [INFO][5222] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:46.635064 containerd[1821]: 2026-01-24 00:49:46.583 [INFO][5222] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.69/26] IPv6=[] ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" HandleID="k8s-pod-network.184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.590 [INFO][5210] cni-plugin/k8s.go 418: Populated endpoint ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb072f7-3316-4b11-9780-0976f355dc5e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"calico-apiserver-7d87ffcbb4-d45k5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia973dd7dbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.590 [INFO][5210] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.69/32] ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.590 [INFO][5210] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia973dd7dbb0 ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.606 [INFO][5210] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.608 
[INFO][5210] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb072f7-3316-4b11-9780-0976f355dc5e", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7", Pod:"calico-apiserver-7d87ffcbb4-d45k5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia973dd7dbb0", MAC:"36:45:6b:a7:82:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:46.637559 containerd[1821]: 2026-01-24 00:49:46.632 [INFO][5210] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7" Namespace="calico-apiserver" Pod="calico-apiserver-7d87ffcbb4-d45k5" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:46.674892 kubelet[3388]: I0124 00:49:46.674025 3388 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-rznsh" podStartSLOduration=43.673973765 podStartE2EDuration="43.673973765s" podCreationTimestamp="2026-01-24 00:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:49:46.672066646 +0000 UTC m=+48.471659963" watchObservedRunningTime="2026-01-24 00:49:46.673973765 +0000 UTC m=+48.473566982" Jan 24 00:49:46.692180 containerd[1821]: time="2026-01-24T00:49:46.690560233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:46.695918 containerd[1821]: time="2026-01-24T00:49:46.695404981Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:46.697902 containerd[1821]: time="2026-01-24T00:49:46.695891486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:46.697902 containerd[1821]: time="2026-01-24T00:49:46.697627304Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:46.814325 containerd[1821]: time="2026-01-24T00:49:46.814252081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d87ffcbb4-d45k5,Uid:feb072f7-3316-4b11-9780-0976f355dc5e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7\"" Jan 24 00:49:46.816346 containerd[1821]: time="2026-01-24T00:49:46.816234101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:49:47.083294 containerd[1821]: time="2026-01-24T00:49:47.083222297Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:47.086634 containerd[1821]: time="2026-01-24T00:49:47.086579030Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:49:47.086742 containerd[1821]: time="2026-01-24T00:49:47.086668731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:47.086985 kubelet[3388]: E0124 00:49:47.086941 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:47.087076 kubelet[3388]: E0124 00:49:47.086998 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:49:47.087199 kubelet[3388]: E0124 00:49:47.087154 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8jrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:47.088657 kubelet[3388]: E0124 00:49:47.088617 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:49:47.554018 systemd-networkd[1395]: cali18bb639d94f: Gained IPv6LL Jan 24 00:49:47.579409 kubelet[3388]: E0124 00:49:47.579347 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:49:47.679636 kubelet[3388]: I0124 00:49:47.679033 
3388 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:49:47.682995 systemd-networkd[1395]: calia973dd7dbb0: Gained IPv6LL Jan 24 00:49:47.683348 systemd-networkd[1395]: cali2ec281444a0: Gained IPv6LL Jan 24 00:49:47.795836 systemd[1]: run-containerd-runc-k8s.io-f8d0e2c7c1c055a1a83c2d9d60485a1223dfec32d568d840eb33028bc891f83f-runc.ktPYNk.mount: Deactivated successfully. Jan 24 00:49:48.301907 containerd[1821]: time="2026-01-24T00:49:48.301620797Z" level=info msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" Jan 24 00:49:48.302549 containerd[1821]: time="2026-01-24T00:49:48.302204103Z" level=info msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" Jan 24 00:49:48.306489 containerd[1821]: time="2026-01-24T00:49:48.305354834Z" level=info msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" Jan 24 00:49:48.590607 kubelet[3388]: E0124 00:49:48.590386 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.429 [INFO][5356] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.430 [INFO][5356] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" iface="eth0" netns="/var/run/netns/cni-f5283432-b54e-ec7f-3279-47d4d6d8c879" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.431 [INFO][5356] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" iface="eth0" netns="/var/run/netns/cni-f5283432-b54e-ec7f-3279-47d4d6d8c879" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.432 [INFO][5356] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" iface="eth0" netns="/var/run/netns/cni-f5283432-b54e-ec7f-3279-47d4d6d8c879" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.432 [INFO][5356] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.432 [INFO][5356] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.554 [INFO][5383] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.556 [INFO][5383] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.556 [INFO][5383] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.577 [WARNING][5383] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.577 [INFO][5383] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.579 [INFO][5383] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:48.601268 containerd[1821]: 2026-01-24 00:49:48.587 [INFO][5356] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:48.601268 containerd[1821]: time="2026-01-24T00:49:48.599672706Z" level=info msg="TearDown network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" successfully" Jan 24 00:49:48.601268 containerd[1821]: time="2026-01-24T00:49:48.599703406Z" level=info msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" returns successfully" Jan 24 00:49:48.610328 containerd[1821]: time="2026-01-24T00:49:48.608279092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66c2,Uid:52d6adb2-e5fc-4ea6-8c92-021d49b0142f,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:48.609990 systemd[1]: run-netns-cni\x2df5283432\x2db54e\x2dec7f\x2d3279\x2d47d4d6d8c879.mount: Deactivated successfully. Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.421 [INFO][5364] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.421 [INFO][5364] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" iface="eth0" netns="/var/run/netns/cni-6e709ec2-fe1e-1459-df17-23016f694368" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.421 [INFO][5364] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" iface="eth0" netns="/var/run/netns/cni-6e709ec2-fe1e-1459-df17-23016f694368" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.422 [INFO][5364] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" iface="eth0" netns="/var/run/netns/cni-6e709ec2-fe1e-1459-df17-23016f694368" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.422 [INFO][5364] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.422 [INFO][5364] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.565 [INFO][5381] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.565 [INFO][5381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.581 [INFO][5381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.608 [WARNING][5381] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.609 [INFO][5381] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.618 [INFO][5381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:48.626321 containerd[1821]: 2026-01-24 00:49:48.624 [INFO][5364] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:48.635467 containerd[1821]: time="2026-01-24T00:49:48.634865161Z" level=info msg="TearDown network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" successfully" Jan 24 00:49:48.635467 containerd[1821]: time="2026-01-24T00:49:48.634902161Z" level=info msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" returns successfully" Jan 24 00:49:48.637265 containerd[1821]: time="2026-01-24T00:49:48.637237885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5bf95d6-rk94n,Uid:37cb1a00-f2a5-4886-98cf-7e9aeba0026f,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:48.637856 systemd[1]: run-netns-cni\x2d6e709ec2\x2dfe1e\x2d1459\x2ddf17\x2d23016f694368.mount: Deactivated successfully. Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.458 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.461 [INFO][5363] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" iface="eth0" netns="/var/run/netns/cni-5460b305-1472-9265-c07d-3f40827c778d" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.462 [INFO][5363] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" iface="eth0" netns="/var/run/netns/cni-5460b305-1472-9265-c07d-3f40827c778d" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.462 [INFO][5363] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" iface="eth0" netns="/var/run/netns/cni-5460b305-1472-9265-c07d-3f40827c778d" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.462 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.462 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.641 [INFO][5391] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.642 [INFO][5391] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.642 [INFO][5391] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.652 [WARNING][5391] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.652 [INFO][5391] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.654 [INFO][5391] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:48.659973 containerd[1821]: 2026-01-24 00:49:48.657 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:48.662911 containerd[1821]: time="2026-01-24T00:49:48.660879023Z" level=info msg="TearDown network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" successfully" Jan 24 00:49:48.662911 containerd[1821]: time="2026-01-24T00:49:48.660914424Z" level=info msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" returns successfully" Jan 24 00:49:48.664035 containerd[1821]: time="2026-01-24T00:49:48.663547550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pw49,Uid:6da5a353-0459-4899-8898-8a79910e38eb,Namespace:calico-system,Attempt:1,}" Jan 24 00:49:48.670652 systemd[1]: run-netns-cni\x2d5460b305\x2d1472\x2d9265\x2dc07d\x2d3f40827c778d.mount: Deactivated successfully. Jan 24 00:49:48.963464 systemd-networkd[1395]: cali3567a7efd55: Link UP Jan 24 00:49:48.972047 systemd-networkd[1395]: cali3567a7efd55: Gained carrier Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.752 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0 csi-node-driver- calico-system 52d6adb2-e5fc-4ea6-8c92-021d49b0142f 1022 0 2026-01-24 00:49:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 csi-node-driver-w66c2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3567a7efd55 [] [] }} ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.753 [INFO][5401] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.834 [INFO][5436] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" 
HandleID="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.835 [INFO][5436] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" HandleID="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"csi-node-driver-w66c2", "timestamp":"2026-01-24 00:49:48.834506476 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.835 [INFO][5436] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.836 [INFO][5436] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.836 [INFO][5436] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.852 [INFO][5436] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.861 [INFO][5436] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.872 [INFO][5436] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.878 [INFO][5436] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.888 [INFO][5436] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.888 [INFO][5436] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.892 [INFO][5436] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.904 [INFO][5436] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.915 [INFO][5436] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.70/26] block=192.168.88.64/26 handle="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.916 [INFO][5436] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.70/26] handle="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.916 [INFO][5436] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:49.008079 containerd[1821]: 2026-01-24 00:49:48.916 [INFO][5436] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.70/26] IPv6=[] ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" HandleID="k8s-pod-network.8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.933 [INFO][5401] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52d6adb2-e5fc-4ea6-8c92-021d49b0142f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"csi-node-driver-w66c2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3567a7efd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.935 [INFO][5401] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.70/32] ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.935 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3567a7efd55 ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.975 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" 
Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.976 [INFO][5401] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52d6adb2-e5fc-4ea6-8c92-021d49b0142f", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e", Pod:"csi-node-driver-w66c2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3567a7efd55", MAC:"7e:92:bd:fc:f8:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.009551 containerd[1821]: 2026-01-24 00:49:48.999 [INFO][5401] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e" Namespace="calico-system" Pod="csi-node-driver-w66c2" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:49.052903 systemd-networkd[1395]: cali7e8a5466abe: Link UP Jan 24 00:49:49.067035 systemd-networkd[1395]: cali7e8a5466abe: Gained carrier Jan 24 00:49:49.098141 containerd[1821]: time="2026-01-24T00:49:49.097928336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:49.098784 containerd[1821]: time="2026-01-24T00:49:49.098237639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:49.098784 containerd[1821]: time="2026-01-24T00:49:49.098255339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.098784 containerd[1821]: time="2026-01-24T00:49:49.098561842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.786 [INFO][5413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0 calico-kube-controllers-c5bf95d6- calico-system 37cb1a00-f2a5-4886-98cf-7e9aeba0026f 1021 0 2026-01-24 00:49:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c5bf95d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 calico-kube-controllers-c5bf95d6-rk94n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7e8a5466abe [] [] }} ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.787 [INFO][5413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.890 [INFO][5446] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" HandleID="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.891 [INFO][5446] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" HandleID="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f320), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"calico-kube-controllers-c5bf95d6-rk94n", "timestamp":"2026-01-24 00:49:48.890984846 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.892 [INFO][5446] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.916 [INFO][5446] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.916 [INFO][5446] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.955 [INFO][5446] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.976 [INFO][5446] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.995 [INFO][5446] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:48.997 [INFO][5446] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.003 [INFO][5446] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.003 [INFO][5446] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.007 [INFO][5446] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3 Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.017 [INFO][5446] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.033 [INFO][5446] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.71/26] block=192.168.88.64/26 handle="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.034 [INFO][5446] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.71/26] handle="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.034 [INFO][5446] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:49.114351 containerd[1821]: 2026-01-24 00:49:49.034 [INFO][5446] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.71/26] IPv6=[] ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" HandleID="k8s-pod-network.07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.041 [INFO][5413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0", GenerateName:"calico-kube-controllers-c5bf95d6-", Namespace:"calico-system", SelfLink:"", UID:"37cb1a00-f2a5-4886-98cf-7e9aeba0026f", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5bf95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"calico-kube-controllers-c5bf95d6-rk94n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8a5466abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.041 [INFO][5413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.71/32] ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.041 [INFO][5413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7e8a5466abe ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.061 [INFO][5413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 
00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.072 [INFO][5413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0", GenerateName:"calico-kube-controllers-c5bf95d6-", Namespace:"calico-system", SelfLink:"", UID:"37cb1a00-f2a5-4886-98cf-7e9aeba0026f", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5bf95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3", Pod:"calico-kube-controllers-c5bf95d6-rk94n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8a5466abe", MAC:"ca:52:9d:50:1a:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.115752 containerd[1821]: 2026-01-24 00:49:49.110 [INFO][5413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3" Namespace="calico-system" Pod="calico-kube-controllers-c5bf95d6-rk94n" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:49.176406 systemd-networkd[1395]: calif0bf9bc61cf: Link UP Jan 24 00:49:49.176698 systemd-networkd[1395]: calif0bf9bc61cf: Gained carrier Jan 24 00:49:49.193678 containerd[1821]: time="2026-01-24T00:49:49.191114376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:49.193678 containerd[1821]: time="2026-01-24T00:49:49.191183477Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:49.193678 containerd[1821]: time="2026-01-24T00:49:49.191204777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.193678 containerd[1821]: time="2026-01-24T00:49:49.191310678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:48.805 [INFO][5426] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0 goldmane-666569f655- calico-system 6da5a353-0459-4899-8898-8a79910e38eb 1023 0 2026-01-24 00:49:16 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-d923855e69 goldmane-666569f655-7pw49 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calif0bf9bc61cf [] [] }} ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:48.805 [INFO][5426] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:48.909 [INFO][5451] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" HandleID="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:48.910 [INFO][5451] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" HandleID="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037cdc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-d923855e69", "pod":"goldmane-666569f655-7pw49", "timestamp":"2026-01-24 00:49:48.909730436 +0000 UTC"}, Hostname:"ci-4081.3.6-n-d923855e69", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:48.910 [INFO][5451] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.037 [INFO][5451] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.037 [INFO][5451] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-d923855e69' Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.063 [INFO][5451] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.080 [INFO][5451] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.094 [INFO][5451] ipam/ipam.go 511: Trying affinity for 192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.096 [INFO][5451] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.115 [INFO][5451] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.64/26 host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.117 [INFO][5451] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.88.64/26 handle="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.123 [INFO][5451] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.138 [INFO][5451] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.88.64/26 handle="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.157 [INFO][5451] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.88.72/26] block=192.168.88.64/26 handle="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.157 [INFO][5451] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.72/26] handle="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" host="ci-4081.3.6-n-d923855e69" Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.158 [INFO][5451] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:49:49.214935 containerd[1821]: 2026-01-24 00:49:49.158 [INFO][5451] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.88.72/26] IPv6=[] ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" HandleID="k8s-pod-network.fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.166 [INFO][5426] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6da5a353-0459-4899-8898-8a79910e38eb", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"", Pod:"goldmane-666569f655-7pw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bf9bc61cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.167 [INFO][5426] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.72/32] ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.167 [INFO][5426] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif0bf9bc61cf ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.175 [INFO][5426] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.176 [INFO][5426] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" 
Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6da5a353-0459-4899-8898-8a79910e38eb", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e", Pod:"goldmane-666569f655-7pw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bf9bc61cf", MAC:"fe:ac:6b:21:6a:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:49.215868 containerd[1821]: 2026-01-24 00:49:49.209 [INFO][5426] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e" Namespace="calico-system" Pod="goldmane-666569f655-7pw49" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:49.259879 containerd[1821]: time="2026-01-24T00:49:49.259286965Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:49:49.259879 containerd[1821]: time="2026-01-24T00:49:49.259381866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:49:49.259879 containerd[1821]: time="2026-01-24T00:49:49.259398866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.261814 containerd[1821]: time="2026-01-24T00:49:49.260202474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:49:49.309914 containerd[1821]: time="2026-01-24T00:49:49.309869775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w66c2,Uid:52d6adb2-e5fc-4ea6-8c92-021d49b0142f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e\"" Jan 24 00:49:49.313014 containerd[1821]: time="2026-01-24T00:49:49.312960406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:49:49.457790 containerd[1821]: time="2026-01-24T00:49:49.451720507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c5bf95d6-rk94n,Uid:37cb1a00-f2a5-4886-98cf-7e9aeba0026f,Namespace:calico-system,Attempt:1,} returns sandbox id \"07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3\"" Jan 24 00:49:49.486370 containerd[1821]: time="2026-01-24T00:49:49.486327057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-7pw49,Uid:6da5a353-0459-4899-8898-8a79910e38eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e\"" Jan 24 00:49:49.590364 containerd[1821]: time="2026-01-24T00:49:49.589729000Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:49.595452 containerd[1821]: time="2026-01-24T00:49:49.595402358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:49:49.595567 containerd[1821]: time="2026-01-24T00:49:49.595477558Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:49:49.596787 kubelet[3388]: E0124 00:49:49.595853 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:49:49.596787 kubelet[3388]: E0124 00:49:49.595908 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:49:49.596787 kubelet[3388]: E0124 00:49:49.596111 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:49.597800 containerd[1821]: time="2026-01-24T00:49:49.597600480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:49:49.878672 containerd[1821]: time="2026-01-24T00:49:49.878399515Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:49.882845 containerd[1821]: time="2026-01-24T00:49:49.882779959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:49:49.882968 containerd[1821]: time="2026-01-24T00:49:49.882921760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:49.883183 kubelet[3388]: E0124 00:49:49.883139 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:49:49.883288 kubelet[3388]: E0124 00:49:49.883201 3388 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:49:49.883552 kubelet[3388]: E0124 00:49:49.883480 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2n6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:49.884056 containerd[1821]: time="2026-01-24T00:49:49.884023171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:49:49.885727 kubelet[3388]: E0124 00:49:49.885568 3388 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:49:50.167896 containerd[1821]: time="2026-01-24T00:49:50.167712423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:50.171252 containerd[1821]: time="2026-01-24T00:49:50.171195156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:49:50.171385 containerd[1821]: time="2026-01-24T00:49:50.171324458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:49:50.173021 kubelet[3388]: E0124 00:49:50.172934 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:50.173021 kubelet[3388]: E0124 00:49:50.173000 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:49:50.173363 kubelet[3388]: E0124 00:49:50.173281 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:50.173866 containerd[1821]: time="2026-01-24T00:49:50.173819882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:49:50.174487 kubelet[3388]: E0124 00:49:50.174442 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:49:50.445207 containerd[1821]: time="2026-01-24T00:49:50.445050115Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:50.448406 containerd[1821]: time="2026-01-24T00:49:50.448303246Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:49:50.448406 containerd[1821]: time="2026-01-24T00:49:50.448348347Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:49:50.448635 kubelet[3388]: E0124 00:49:50.448582 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:49:50.448726 kubelet[3388]: E0124 00:49:50.448649 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:49:50.448903 kubelet[3388]: E0124 00:49:50.448837 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:50.450435 kubelet[3388]: E0124 00:49:50.450326 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:50.612213 kubelet[3388]: E0124 00:49:50.612159 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:49:50.614414 kubelet[3388]: E0124 00:49:50.612409 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:49:50.615559 kubelet[3388]: E0124 00:49:50.615397 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:49:50.625894 systemd-networkd[1395]: cali3567a7efd55: Gained IPv6LL Jan 24 00:49:50.818978 systemd-networkd[1395]: cali7e8a5466abe: Gained IPv6LL Jan 24 00:49:51.074175 systemd-networkd[1395]: calif0bf9bc61cf: Gained IPv6LL Jan 24 00:49:54.300599 containerd[1821]: time="2026-01-24T00:49:54.300298041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:49:54.575576 containerd[1821]: time="2026-01-24T00:49:54.575430912Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:54.582754 containerd[1821]: time="2026-01-24T00:49:54.582699883Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:49:54.582902 containerd[1821]: time="2026-01-24T00:49:54.582812484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:49:54.583006 kubelet[3388]: E0124 00:49:54.582965 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:54.583417 kubelet[3388]: E0124 00:49:54.583022 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:49:54.583417 kubelet[3388]: E0124 00:49:54.583152 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4d61cbab0549149b9649b46b6d3269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:54.585799 containerd[1821]: time="2026-01-24T00:49:54.585754212Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:49:54.851827 containerd[1821]: time="2026-01-24T00:49:54.851599893Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:49:54.854905 containerd[1821]: time="2026-01-24T00:49:54.854855425Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:49:54.855042 containerd[1821]: time="2026-01-24T00:49:54.854865325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:49:54.855180 kubelet[3388]: E0124 00:49:54.855130 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 
00:49:54.855255 kubelet[3388]: E0124 00:49:54.855186 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:49:54.855364 kubelet[3388]: E0124 00:49:54.855326 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:49:54.856909 kubelet[3388]: E0124 00:49:54.856837 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:49:58.312673 containerd[1821]: time="2026-01-24T00:49:58.312629516Z" level=info msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.418 [WARNING][5635] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0", GenerateName:"calico-kube-controllers-c5bf95d6-", Namespace:"calico-system", SelfLink:"", UID:"37cb1a00-f2a5-4886-98cf-7e9aeba0026f", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5bf95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3", Pod:"calico-kube-controllers-c5bf95d6-rk94n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8a5466abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.418 [INFO][5635] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.418 [INFO][5635] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" iface="eth0" netns="" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.418 [INFO][5635] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.418 [INFO][5635] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.496 [INFO][5642] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.497 [INFO][5642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.497 [INFO][5642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.537 [WARNING][5642] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.538 [INFO][5642] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.539 [INFO][5642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:58.548314 containerd[1821]: 2026-01-24 00:49:58.544 [INFO][5635] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.548314 containerd[1821]: time="2026-01-24T00:49:58.547323227Z" level=info msg="TearDown network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" successfully" Jan 24 00:49:58.548314 containerd[1821]: time="2026-01-24T00:49:58.547386328Z" level=info msg="StopPodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" returns successfully" Jan 24 00:49:58.553824 containerd[1821]: time="2026-01-24T00:49:58.548939543Z" level=info msg="RemovePodSandbox for \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" Jan 24 00:49:58.553824 containerd[1821]: time="2026-01-24T00:49:58.548974043Z" level=info msg="Forcibly stopping sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\"" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.597 [WARNING][5657] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0", GenerateName:"calico-kube-controllers-c5bf95d6-", Namespace:"calico-system", SelfLink:"", UID:"37cb1a00-f2a5-4886-98cf-7e9aeba0026f", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c5bf95d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"07a0d620629ea4b68c840f69fbf53c4a4e59f4a1e9bedfe2156e31be265e81f3", Pod:"calico-kube-controllers-c5bf95d6-rk94n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7e8a5466abe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.597 [INFO][5657] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.597 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" iface="eth0" netns="" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.597 [INFO][5657] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.597 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.630 [INFO][5664] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.630 [INFO][5664] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.630 [INFO][5664] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.637 [WARNING][5664] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.638 [INFO][5664] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" HandleID="k8s-pod-network.75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--kube--controllers--c5bf95d6--rk94n-eth0" Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.639 [INFO][5664] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:58.643611 containerd[1821]: 2026-01-24 00:49:58.641 [INFO][5657] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021" Jan 24 00:49:58.643611 containerd[1821]: time="2026-01-24T00:49:58.643508375Z" level=info msg="TearDown network for sandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" successfully" Jan 24 00:49:58.655377 containerd[1821]: time="2026-01-24T00:49:58.655336291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:58.655601 containerd[1821]: time="2026-01-24T00:49:58.655540393Z" level=info msg="RemovePodSandbox \"75e299907f53c5ad4621f07bf577c1f49bdc67656adee2d781af5a0a40819021\" returns successfully" Jan 24 00:49:58.656429 containerd[1821]: time="2026-01-24T00:49:58.656225800Z" level=info msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.727 [WARNING][5678] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b5279fd-856f-4138-b3c1-0703370fedaa", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f", Pod:"coredns-668d6bf9bc-cdd95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec281444a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.727 [INFO][5678] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.728 [INFO][5678] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" iface="eth0" netns="" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.728 [INFO][5678] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.728 [INFO][5678] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.763 [INFO][5686] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.763 [INFO][5686] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.763 [INFO][5686] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.771 [WARNING][5686] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.771 [INFO][5686] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.772 [INFO][5686] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:58.775895 containerd[1821]: 2026-01-24 00:49:58.773 [INFO][5678] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.776550 containerd[1821]: time="2026-01-24T00:49:58.775939279Z" level=info msg="TearDown network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" successfully" Jan 24 00:49:58.776550 containerd[1821]: time="2026-01-24T00:49:58.775968479Z" level=info msg="StopPodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" returns successfully" Jan 24 00:49:58.777009 containerd[1821]: time="2026-01-24T00:49:58.776980589Z" level=info msg="RemovePodSandbox for \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" Jan 24 00:49:58.777118 containerd[1821]: time="2026-01-24T00:49:58.777012090Z" level=info msg="Forcibly stopping sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\"" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.828 [WARNING][5700] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"7b5279fd-856f-4138-b3c1-0703370fedaa", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"e869b85687b64c35e78efff3e8e16d9b2deec407330df520dec16712067e417f", Pod:"coredns-668d6bf9bc-cdd95", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2ec281444a0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.828 [INFO][5700] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.828 [INFO][5700] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" iface="eth0" netns="" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.828 [INFO][5700] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.828 [INFO][5700] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.871 [INFO][5707] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.873 [INFO][5707] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.873 [INFO][5707] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.882 [WARNING][5707] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.882 [INFO][5707] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" HandleID="k8s-pod-network.8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--cdd95-eth0" Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.883 [INFO][5707] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:58.889316 containerd[1821]: 2026-01-24 00:49:58.886 [INFO][5700] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e" Jan 24 00:49:58.889983 containerd[1821]: time="2026-01-24T00:49:58.889349396Z" level=info msg="TearDown network for sandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" successfully" Jan 24 00:49:58.900673 containerd[1821]: time="2026-01-24T00:49:58.900377405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:58.902972 containerd[1821]: time="2026-01-24T00:49:58.902063521Z" level=info msg="RemovePodSandbox \"8910e5d6d94317cd8e40cfb8f5d178b0654741f5c2a664a8097f997be6faff5e\" returns successfully" Jan 24 00:49:58.902972 containerd[1821]: time="2026-01-24T00:49:58.902625527Z" level=info msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.957 [WARNING][5721] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb072f7-3316-4b11-9780-0976f355dc5e", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7", Pod:"calico-apiserver-7d87ffcbb4-d45k5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia973dd7dbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.957 [INFO][5721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.957 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" iface="eth0" netns="" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.957 [INFO][5721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.957 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.992 [INFO][5728] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.992 [INFO][5728] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.992 [INFO][5728] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.999 [WARNING][5728] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:58.999 [INFO][5728] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:59.000 [INFO][5728] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.004941 containerd[1821]: 2026-01-24 00:49:59.002 [INFO][5721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.006836 containerd[1821]: time="2026-01-24T00:49:59.005556641Z" level=info msg="TearDown network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" successfully" Jan 24 00:49:59.006836 containerd[1821]: time="2026-01-24T00:49:59.005609641Z" level=info msg="StopPodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" returns successfully" Jan 24 00:49:59.007422 containerd[1821]: time="2026-01-24T00:49:59.007373859Z" level=info msg="RemovePodSandbox for \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" Jan 24 00:49:59.007593 containerd[1821]: time="2026-01-24T00:49:59.007406959Z" level=info msg="Forcibly stopping sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\"" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.054 [WARNING][5743] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"feb072f7-3316-4b11-9780-0976f355dc5e", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"184a7f4cb1ad730302750ea378c0c4eb9c419b9c5cdc635277e780502417ddc7", Pod:"calico-apiserver-7d87ffcbb4-d45k5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia973dd7dbb0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.054 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.054 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" iface="eth0" netns="" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.054 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.054 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.086 [INFO][5751] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.087 [INFO][5751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.087 [INFO][5751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.094 [WARNING][5751] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.094 [INFO][5751] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" HandleID="k8s-pod-network.77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--d45k5-eth0" Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.096 [INFO][5751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.100036 containerd[1821]: 2026-01-24 00:49:59.097 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9" Jan 24 00:49:59.101524 containerd[1821]: time="2026-01-24T00:49:59.099991471Z" level=info msg="TearDown network for sandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" successfully" Jan 24 00:49:59.110253 containerd[1821]: time="2026-01-24T00:49:59.110167671Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:59.110358 containerd[1821]: time="2026-01-24T00:49:59.110293572Z" level=info msg="RemovePodSandbox \"77150c8cf48413a59a58a3ab05915eb05179a0cf6af9b14697d7984cd539b9b9\" returns successfully" Jan 24 00:49:59.111260 containerd[1821]: time="2026-01-24T00:49:59.111229382Z" level=info msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.162 [WARNING][5765] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52d6adb2-e5fc-4ea6-8c92-021d49b0142f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e", Pod:"csi-node-driver-w66c2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3567a7efd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.162 [INFO][5765] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.162 [INFO][5765] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" iface="eth0" netns="" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.162 [INFO][5765] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.162 [INFO][5765] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.199 [INFO][5773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.200 [INFO][5773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.201 [INFO][5773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.209 [WARNING][5773] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.209 [INFO][5773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.213 [INFO][5773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.217906 containerd[1821]: 2026-01-24 00:49:59.215 [INFO][5765] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.218495 containerd[1821]: time="2026-01-24T00:49:59.218032133Z" level=info msg="TearDown network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" successfully" Jan 24 00:49:59.218495 containerd[1821]: time="2026-01-24T00:49:59.218078034Z" level=info msg="StopPodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" returns successfully" Jan 24 00:49:59.219067 containerd[1821]: time="2026-01-24T00:49:59.219033143Z" level=info msg="RemovePodSandbox for \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" Jan 24 00:49:59.219067 containerd[1821]: time="2026-01-24T00:49:59.219065344Z" level=info msg="Forcibly stopping sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\"" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.270 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52d6adb2-e5fc-4ea6-8c92-021d49b0142f", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"8a62900abb347f053623a48b62967dde6da907c8156fc97863482033dff8f94e", Pod:"csi-node-driver-w66c2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3567a7efd55", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.270 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.270 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" iface="eth0" netns="" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.270 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.270 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.295 [INFO][5795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.295 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.295 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.301 [WARNING][5795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.301 [INFO][5795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" HandleID="k8s-pod-network.260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Workload="ci--4081.3.6--n--d923855e69-k8s-csi--node--driver--w66c2-eth0" Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.303 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.306219 containerd[1821]: 2026-01-24 00:49:59.304 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b" Jan 24 00:49:59.306918 containerd[1821]: time="2026-01-24T00:49:59.306275803Z" level=info msg="TearDown network for sandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" successfully" Jan 24 00:49:59.313380 containerd[1821]: time="2026-01-24T00:49:59.313340272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:59.314109 containerd[1821]: time="2026-01-24T00:49:59.313400073Z" level=info msg="RemovePodSandbox \"260c770b809197582a107c9c46d3cac97f0b862bf3276c4f735b2854041adc4b\" returns successfully" Jan 24 00:49:59.314109 containerd[1821]: time="2026-01-24T00:49:59.313936578Z" level=info msg="StopPodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.356 [WARNING][5809] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce17dab6-c6ae-4d47-91e5-8ead47b1af74", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a", Pod:"calico-apiserver-7d87ffcbb4-hrz4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali258bfe9ee26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.356 [INFO][5809] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.356 [INFO][5809] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" iface="eth0" netns="" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.356 [INFO][5809] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.356 [INFO][5809] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.381 [INFO][5816] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.381 [INFO][5816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.382 [INFO][5816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.387 [WARNING][5816] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.387 [INFO][5816] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.388 [INFO][5816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.392682 containerd[1821]: 2026-01-24 00:49:59.390 [INFO][5809] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.392682 containerd[1821]: time="2026-01-24T00:49:59.392218249Z" level=info msg="TearDown network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" successfully" Jan 24 00:49:59.392682 containerd[1821]: time="2026-01-24T00:49:59.392251249Z" level=info msg="StopPodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" returns successfully" Jan 24 00:49:59.393438 containerd[1821]: time="2026-01-24T00:49:59.393072858Z" level=info msg="RemovePodSandbox for \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" Jan 24 00:49:59.393438 containerd[1821]: time="2026-01-24T00:49:59.393105458Z" level=info msg="Forcibly stopping sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\"" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.425 [WARNING][5830] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0", GenerateName:"calico-apiserver-7d87ffcbb4-", Namespace:"calico-apiserver", SelfLink:"", UID:"ce17dab6-c6ae-4d47-91e5-8ead47b1af74", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d87ffcbb4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"ae1f4b7a6a47d5f534bdc63b8a51adf74904b6cb4514d6d414059bffe4980f5a", Pod:"calico-apiserver-7d87ffcbb4-hrz4c", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali258bfe9ee26", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.425 [INFO][5830] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.425 [INFO][5830] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" iface="eth0" netns="" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.425 [INFO][5830] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.425 [INFO][5830] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.449 [INFO][5837] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.450 [INFO][5837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.450 [INFO][5837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.457 [WARNING][5837] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.457 [INFO][5837] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" HandleID="k8s-pod-network.9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Workload="ci--4081.3.6--n--d923855e69-k8s-calico--apiserver--7d87ffcbb4--hrz4c-eth0" Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.459 [INFO][5837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.462729 containerd[1821]: 2026-01-24 00:49:59.460 [INFO][5830] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83" Jan 24 00:49:59.464492 containerd[1821]: time="2026-01-24T00:49:59.462863745Z" level=info msg="TearDown network for sandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" successfully" Jan 24 00:49:59.477506 containerd[1821]: time="2026-01-24T00:49:59.477382688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:59.477703 containerd[1821]: time="2026-01-24T00:49:59.477678391Z" level=info msg="RemovePodSandbox \"9f0f2837b8ce67e1afb87a58b48fb88c8b551b8e25f879abef18ad99871fee83\" returns successfully" Jan 24 00:49:59.478880 containerd[1821]: time="2026-01-24T00:49:59.478852402Z" level=info msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.523 [WARNING][5852] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.524 [INFO][5852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.524 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" iface="eth0" netns="" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.524 [INFO][5852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.524 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.552 [INFO][5861] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.552 [INFO][5861] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.553 [INFO][5861] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.579 [WARNING][5861] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.579 [INFO][5861] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.586 [INFO][5861] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.599813 containerd[1821]: 2026-01-24 00:49:59.596 [INFO][5852] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.599813 containerd[1821]: time="2026-01-24T00:49:59.598522781Z" level=info msg="TearDown network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" successfully" Jan 24 00:49:59.599813 containerd[1821]: time="2026-01-24T00:49:59.598562482Z" level=info msg="StopPodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" returns successfully" Jan 24 00:49:59.608936 containerd[1821]: time="2026-01-24T00:49:59.605166147Z" level=info msg="RemovePodSandbox for \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" Jan 24 00:49:59.608936 containerd[1821]: time="2026-01-24T00:49:59.605205247Z" level=info msg="Forcibly stopping sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\"" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.739 [WARNING][5875] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" WorkloadEndpoint="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.739 [INFO][5875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.739 [INFO][5875] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" iface="eth0" netns="" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.739 [INFO][5875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.739 [INFO][5875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.781 [INFO][5882] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.782 [INFO][5882] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.782 [INFO][5882] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.790 [WARNING][5882] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.790 [INFO][5882] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" HandleID="k8s-pod-network.ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Workload="ci--4081.3.6--n--d923855e69-k8s-whisker--7bd74fcf67--tzlpc-eth0" Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.791 [INFO][5882] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.796847 containerd[1821]: 2026-01-24 00:49:59.793 [INFO][5875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39" Jan 24 00:49:59.796847 containerd[1821]: time="2026-01-24T00:49:59.795208818Z" level=info msg="TearDown network for sandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" successfully" Jan 24 00:49:59.803992 containerd[1821]: time="2026-01-24T00:49:59.803759703Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:49:59.803992 containerd[1821]: time="2026-01-24T00:49:59.803842304Z" level=info msg="RemovePodSandbox \"ec560900a7c3b009f408fd64bea35db21c980e97f039fb11ffe2ad342cd6ed39\" returns successfully" Jan 24 00:49:59.805938 containerd[1821]: time="2026-01-24T00:49:59.804394109Z" level=info msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.854 [WARNING][5896] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6da5a353-0459-4899-8898-8a79910e38eb", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e", Pod:"goldmane-666569f655-7pw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bf9bc61cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.855 [INFO][5896] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.855 [INFO][5896] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" iface="eth0" netns="" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.855 [INFO][5896] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.855 [INFO][5896] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.886 [INFO][5904] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.887 [INFO][5904] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.887 [INFO][5904] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.897 [WARNING][5904] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.897 [INFO][5904] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.899 [INFO][5904] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:49:59.906295 containerd[1821]: 2026-01-24 00:49:59.904 [INFO][5896] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:49:59.911997 containerd[1821]: time="2026-01-24T00:49:59.906362613Z" level=info msg="TearDown network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" successfully" Jan 24 00:49:59.911997 containerd[1821]: time="2026-01-24T00:49:59.906409914Z" level=info msg="StopPodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" returns successfully" Jan 24 00:49:59.911997 containerd[1821]: time="2026-01-24T00:49:59.908163331Z" level=info msg="RemovePodSandbox for \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" Jan 24 00:49:59.911997 containerd[1821]: time="2026-01-24T00:49:59.908194631Z" level=info msg="Forcibly stopping sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\"" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.958 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"6da5a353-0459-4899-8898-8a79910e38eb", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"fbf9b7f21905a4932263ee848ba8ee908f25a6e4411f494e3425e3d56e40432e", Pod:"goldmane-666569f655-7pw49", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calif0bf9bc61cf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.958 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.959 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" iface="eth0" netns="" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.959 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.959 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.995 [INFO][5925] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.996 [INFO][5925] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:49:59.996 [INFO][5925] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:50:00.010 [WARNING][5925] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:50:00.010 [INFO][5925] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" HandleID="k8s-pod-network.9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Workload="ci--4081.3.6--n--d923855e69-k8s-goldmane--666569f655--7pw49-eth0" Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:50:00.014 [INFO][5925] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:50:00.018857 containerd[1821]: 2026-01-24 00:50:00.016 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06" Jan 24 00:50:00.019415 containerd[1821]: time="2026-01-24T00:50:00.019380626Z" level=info msg="TearDown network for sandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" successfully" Jan 24 00:50:00.039698 containerd[1821]: time="2026-01-24T00:50:00.039438024Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:50:00.039698 containerd[1821]: time="2026-01-24T00:50:00.039535625Z" level=info msg="RemovePodSandbox \"9e45698a48ca2a0f85fbb56675f1383dd33618d918cbfbc0468ee47c1b42bd06\" returns successfully" Jan 24 00:50:00.040139 containerd[1821]: time="2026-01-24T00:50:00.040108531Z" level=info msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" Jan 24 00:50:00.305080 containerd[1821]: time="2026-01-24T00:50:00.305032940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.096 [WARNING][5940] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616", Pod:"coredns-668d6bf9bc-rznsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18bb639d94f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.096 [INFO][5940] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.096 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" iface="eth0" netns="" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.096 [INFO][5940] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.096 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.150 [INFO][5947] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.150 [INFO][5947] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.150 [INFO][5947] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.328 [WARNING][5947] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.328 [INFO][5947] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.340 [INFO][5947] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:50:00.346535 containerd[1821]: 2026-01-24 00:50:00.344 [INFO][5940] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.350454 containerd[1821]: time="2026-01-24T00:50:00.346577649Z" level=info msg="TearDown network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" successfully" Jan 24 00:50:00.350454 containerd[1821]: time="2026-01-24T00:50:00.346608650Z" level=info msg="StopPodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" returns successfully" Jan 24 00:50:00.350454 containerd[1821]: time="2026-01-24T00:50:00.347115355Z" level=info msg="RemovePodSandbox for \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" Jan 24 00:50:00.350454 containerd[1821]: time="2026-01-24T00:50:00.347152155Z" level=info msg="Forcibly stopping sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\"" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.393 [WARNING][5961] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ff942fbf-7f43-4ef8-9f92-2e10cfb795ba", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 49, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-d923855e69", ContainerID:"887cc117eae12d4c437a00be245005e1a5b86e91886f7be4006aa1916323f616", Pod:"coredns-668d6bf9bc-rznsh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali18bb639d94f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.393 [INFO][5961] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.393 [INFO][5961] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" iface="eth0" netns="" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.394 [INFO][5961] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.394 [INFO][5961] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.430 [INFO][5968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.430 [INFO][5968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.430 [INFO][5968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.437 [WARNING][5968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.438 [INFO][5968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" HandleID="k8s-pod-network.dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Workload="ci--4081.3.6--n--d923855e69-k8s-coredns--668d6bf9bc--rznsh-eth0" Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.440 [INFO][5968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:50:00.443861 containerd[1821]: 2026-01-24 00:50:00.442 [INFO][5961] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9" Jan 24 00:50:00.444627 containerd[1821]: time="2026-01-24T00:50:00.444009909Z" level=info msg="TearDown network for sandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" successfully" Jan 24 00:50:00.460057 containerd[1821]: time="2026-01-24T00:50:00.459851165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:50:00.460057 containerd[1821]: time="2026-01-24T00:50:00.459935866Z" level=info msg="RemovePodSandbox \"dbe4251f5c4ca32db8ec011add20b5d189a36750bab6cf61ed4e3ea5a58bdfb9\" returns successfully" Jan 24 00:50:00.594944 containerd[1821]: time="2026-01-24T00:50:00.594637693Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:00.598711 containerd[1821]: time="2026-01-24T00:50:00.598645032Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:00.598998 containerd[1821]: time="2026-01-24T00:50:00.598812734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:00.599170 kubelet[3388]: E0124 00:50:00.599119 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:00.599580 kubelet[3388]: E0124 00:50:00.599187 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:00.599961 kubelet[3388]: E0124 00:50:00.599673 3388 kuberuntime_manager.go:1341] "Unhandled 
Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znj2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:00.600954 kubelet[3388]: E0124 00:50:00.600916 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:50:01.300392 containerd[1821]: time="2026-01-24T00:50:01.300130041Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:01.570044 containerd[1821]: time="2026-01-24T00:50:01.568471884Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:01.578508 containerd[1821]: time="2026-01-24T00:50:01.576654065Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:01.578508 containerd[1821]: time="2026-01-24T00:50:01.576716666Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:01.578679 kubelet[3388]: E0124 00:50:01.577855 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:01.578679 kubelet[3388]: E0124 00:50:01.577910 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:01.578679 kubelet[3388]: E0124 00:50:01.578061 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8jrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:01.579683 kubelet[3388]: E0124 00:50:01.579632 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:50:03.301666 containerd[1821]: time="2026-01-24T00:50:03.301440354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:50:03.596095 containerd[1821]: time="2026-01-24T00:50:03.595703452Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:03.598508 containerd[1821]: time="2026-01-24T00:50:03.598455579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:50:03.598639 containerd[1821]: time="2026-01-24T00:50:03.598555480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:50:03.599371 kubelet[3388]: E0124 00:50:03.598835 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:03.599371 kubelet[3388]: E0124 00:50:03.598921 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:03.599371 kubelet[3388]: E0124 00:50:03.599038 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:03.601480 containerd[1821]: time="2026-01-24T00:50:03.601232906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:50:03.868537 containerd[1821]: time="2026-01-24T00:50:03.868386138Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:03.873896 containerd[1821]: time="2026-01-24T00:50:03.873844392Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:50:03.874022 containerd[1821]: time="2026-01-24T00:50:03.873935893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:50:03.874167 kubelet[3388]: E0124 00:50:03.874122 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:03.874269 kubelet[3388]: E0124 00:50:03.874182 3388 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:03.874686 kubelet[3388]: E0124 00:50:03.874350 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:03.875706 kubelet[3388]: E0124 00:50:03.875655 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:50:04.302051 containerd[1821]: time="2026-01-24T00:50:04.301235201Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:50:04.574886 containerd[1821]: time="2026-01-24T00:50:04.574710095Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:04.579913 containerd[1821]: time="2026-01-24T00:50:04.579850946Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:50:04.580076 containerd[1821]: time="2026-01-24T00:50:04.579982247Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:50:04.582785 kubelet[3388]: E0124 00:50:04.581069 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:04.582785 kubelet[3388]: E0124 00:50:04.581132 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:04.582785 kubelet[3388]: E0124 00:50:04.581293 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2n6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:04.583173 kubelet[3388]: E0124 00:50:04.583137 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:50:05.299330 containerd[1821]: time="2026-01-24T00:50:05.299271732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:50:05.564952 containerd[1821]: time="2026-01-24T00:50:05.564806547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:05.567556 containerd[1821]: time="2026-01-24T00:50:05.567441373Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:50:05.567556 containerd[1821]: time="2026-01-24T00:50:05.567487973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:05.568111 kubelet[3388]: E0124 00:50:05.567686 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:05.568111 kubelet[3388]: E0124 00:50:05.567734 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:05.568111 kubelet[3388]: E0124 00:50:05.567891 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:05.569668 kubelet[3388]: E0124 00:50:05.569620 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:50:09.305108 kubelet[3388]: E0124 00:50:09.304987 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:50:11.302883 kubelet[3388]: E0124 00:50:11.300349 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:50:13.299233 kubelet[3388]: E0124 00:50:13.299172 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:50:17.304230 kubelet[3388]: E0124 00:50:17.304171 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:50:17.304824 kubelet[3388]: E0124 00:50:17.304273 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:50:19.301677 kubelet[3388]: E0124 00:50:19.299956 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:50:22.307179 containerd[1821]: time="2026-01-24T00:50:22.307136203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:50:22.587809 containerd[1821]: time="2026-01-24T00:50:22.587492380Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:22.593155 containerd[1821]: time="2026-01-24T00:50:22.593104134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:50:22.593641 containerd[1821]: time="2026-01-24T00:50:22.593384236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:50:22.595381 kubelet[3388]: E0124 00:50:22.593856 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:50:22.595381 kubelet[3388]: E0124 00:50:22.593921 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:50:22.595381 kubelet[3388]: E0124 00:50:22.594057 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4d61cbab0549149b9649b46b6d3269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:22.597210 containerd[1821]: time="2026-01-24T00:50:22.596993471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:50:23.027186 containerd[1821]: time="2026-01-24T00:50:23.027125478Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:23.034540 containerd[1821]: time="2026-01-24T00:50:23.033704041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:50:23.034540 containerd[1821]: time="2026-01-24T00:50:23.034139145Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:50:23.035840 kubelet[3388]: E0124 00:50:23.034453 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:50:23.035840 kubelet[3388]: E0124 00:50:23.034514 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:50:23.035840 kubelet[3388]: E0124 00:50:23.035210 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:23.036554 kubelet[3388]: E0124 00:50:23.036514 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:50:24.302334 containerd[1821]: time="2026-01-24T00:50:24.302288553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:24.594357 containerd[1821]: time="2026-01-24T00:50:24.594208641Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:24.597574 containerd[1821]: time="2026-01-24T00:50:24.597393371Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:24.597574 containerd[1821]: time="2026-01-24T00:50:24.597496172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:24.599813 kubelet[3388]: E0124 00:50:24.597707 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:24.599813 kubelet[3388]: E0124 00:50:24.597805 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:24.599813 kubelet[3388]: E0124 00:50:24.598074 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8jrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:24.599813 kubelet[3388]: E0124 00:50:24.599216 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:50:26.303169 containerd[1821]: time="2026-01-24T00:50:26.302914256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:50:26.563407 containerd[1821]: time="2026-01-24T00:50:26.562831137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:26.566129 containerd[1821]: time="2026-01-24T00:50:26.566075868Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:50:26.566247 containerd[1821]: time="2026-01-24T00:50:26.566161469Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:26.566346 kubelet[3388]: E0124 00:50:26.566303 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:26.566760 kubelet[3388]: E0124 00:50:26.566356 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:50:26.566760 kubelet[3388]: E0124 
00:50:26.566507 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znj2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:26.568053 kubelet[3388]: E0124 00:50:26.568014 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:50:28.306547 containerd[1821]: time="2026-01-24T00:50:28.306214784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:50:28.597835 containerd[1821]: time="2026-01-24T00:50:28.597636266Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:28.603106 containerd[1821]: time="2026-01-24T00:50:28.603025618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:50:28.603443 containerd[1821]: time="2026-01-24T00:50:28.603266420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:50:28.604372 kubelet[3388]: E0124 00:50:28.603645 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:28.604372 kubelet[3388]: E0124 00:50:28.603701 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:50:28.604372 kubelet[3388]: E0124 00:50:28.603891 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:28.606101 kubelet[3388]: E0124 00:50:28.606042 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:50:29.299979 containerd[1821]: time="2026-01-24T00:50:29.299920772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:50:29.567621 containerd[1821]: time="2026-01-24T00:50:29.567481027Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:29.570092 containerd[1821]: time="2026-01-24T00:50:29.570040751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:50:29.570208 containerd[1821]: time="2026-01-24T00:50:29.570129152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:50:29.570369 kubelet[3388]: E0124 00:50:29.570323 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:29.570460 kubelet[3388]: E0124 00:50:29.570383 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:50:29.570878 kubelet[3388]: E0124 00:50:29.570570 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:29.572742 containerd[1821]: time="2026-01-24T00:50:29.572716877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:50:29.839907 containerd[1821]: time="2026-01-24T00:50:29.839612925Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:29.853061 containerd[1821]: time="2026-01-24T00:50:29.852884552Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:50:29.853061 containerd[1821]: time="2026-01-24T00:50:29.852996453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:50:29.854127 kubelet[3388]: E0124 00:50:29.853406 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:29.854127 kubelet[3388]: E0124 00:50:29.853468 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:50:29.854127 kubelet[3388]: E0124 00:50:29.853605 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:29.856788 kubelet[3388]: E0124 00:50:29.855879 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:50:32.304812 containerd[1821]: time="2026-01-24T00:50:32.304749625Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:50:32.575943 containerd[1821]: time="2026-01-24T00:50:32.575786072Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:50:32.579237 containerd[1821]: time="2026-01-24T00:50:32.579173905Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:50:32.579378 containerd[1821]: time="2026-01-24T00:50:32.579274206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:50:32.579512 kubelet[3388]: E0124 00:50:32.579450 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:32.580018 kubelet[3388]: E0124 00:50:32.579520 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:50:32.580018 kubelet[3388]: E0124 00:50:32.579709 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2n6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:50:32.581489 kubelet[3388]: E0124 00:50:32.581431 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:50:37.301973 kubelet[3388]: E0124 00:50:37.301910 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:50:37.303060 kubelet[3388]: E0124 00:50:37.302181 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:50:41.300718 kubelet[3388]: E0124 00:50:41.299750 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:50:43.300956 kubelet[3388]: E0124 00:50:43.300789 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:50:44.299788 kubelet[3388]: E0124 00:50:44.299254 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:50:48.299920 kubelet[3388]: E0124 00:50:48.299841 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:50:49.301779 kubelet[3388]: E0124 00:50:49.301628 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:50:52.300640 kubelet[3388]: E0124 00:50:52.300477 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:50:54.303401 kubelet[3388]: E0124 00:50:54.303345 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:50:56.302375 kubelet[3388]: E0124 00:50:56.302239 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:50:58.311424 kubelet[3388]: E0124 00:50:58.311277 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:51:03.299861 kubelet[3388]: E0124 00:51:03.299801 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:51:03.301197 containerd[1821]: time="2026-01-24T00:51:03.301156900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:51:03.575173 containerd[1821]: time="2026-01-24T00:51:03.575017070Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:03.578259 containerd[1821]: time="2026-01-24T00:51:03.578196701Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:51:03.578449 containerd[1821]: time="2026-01-24T00:51:03.578307302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:51:03.578582 kubelet[3388]: E0124 00:51:03.578515 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:03.578665 kubelet[3388]: E0124 00:51:03.578587 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:51:03.579129 kubelet[3388]: E0124 00:51:03.578755 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4d61cbab0549149b9649b46b6d3269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:03.581621 containerd[1821]: time="2026-01-24T00:51:03.580795426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:51:03.849909 containerd[1821]: time="2026-01-24T00:51:03.849627547Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:03.852479 containerd[1821]: time="2026-01-24T00:51:03.852271973Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:51:03.852479 containerd[1821]: time="2026-01-24T00:51:03.852377774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:51:03.854498 kubelet[3388]: E0124 00:51:03.853899 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:03.854498 kubelet[3388]: E0124 00:51:03.853963 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:51:03.854498 kubelet[3388]: E0124 00:51:03.854106 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:03.855677 kubelet[3388]: E0124 00:51:03.855638 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:51:06.300956 kubelet[3388]: E0124 00:51:06.300841 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:51:06.303942 containerd[1821]: time="2026-01-24T00:51:06.302652762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:06.563157 containerd[1821]: time="2026-01-24T00:51:06.562894799Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:06.565812 containerd[1821]: time="2026-01-24T00:51:06.565746627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:06.566008 containerd[1821]: time="2026-01-24T00:51:06.565780528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:06.566062 kubelet[3388]: E0124 00:51:06.565971 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:06.566062 kubelet[3388]: E0124 00:51:06.566028 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:06.566226 kubelet[3388]: E0124 00:51:06.566182 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8jrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:06.567801 kubelet[3388]: E0124 00:51:06.567698 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:51:09.300802 kubelet[3388]: E0124 00:51:09.300263 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:51:10.304043 containerd[1821]: time="2026-01-24T00:51:10.303929374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:51:10.575249 containerd[1821]: time="2026-01-24T00:51:10.574986031Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:10.578243 containerd[1821]: time="2026-01-24T00:51:10.578071061Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:51:10.578243 containerd[1821]: time="2026-01-24T00:51:10.578176562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:10.578416 kubelet[3388]: E0124 00:51:10.578342 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:10.578416 kubelet[3388]: E0124 00:51:10.578398 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:51:10.578904 kubelet[3388]: E0124 00:51:10.578565 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:10.580399 kubelet[3388]: E0124 00:51:10.580288 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:51:14.091984 systemd[1]: Started sshd@7-10.200.4.5:22-10.200.16.10:34518.service - OpenSSH per-connection server daemon (10.200.16.10:34518). Jan 24 00:51:14.708539 sshd[6072]: Accepted publickey for core from 10.200.16.10 port 34518 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:14.710141 sshd[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:14.714893 systemd-logind[1792]: New session 10 of user core. Jan 24 00:51:14.723021 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:51:15.251544 sshd[6072]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:15.257509 systemd-logind[1792]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:51:15.259183 systemd[1]: sshd@7-10.200.4.5:22-10.200.16.10:34518.service: Deactivated successfully. Jan 24 00:51:15.268032 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:51:15.271621 systemd-logind[1792]: Removed session 10. 
Jan 24 00:51:17.299533 kubelet[3388]: E0124 00:51:17.299481 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:51:17.423475 waagent[2027]: 2026-01-24T00:51:17.423403Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 24 00:51:17.430447 waagent[2027]: 2026-01-24T00:51:17.430388Z INFO ExtHandler Jan 24 00:51:17.430580 waagent[2027]: 2026-01-24T00:51:17.430527Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3c60cc15-f0fc-4132-968d-ed42c81a1438 eTag: 7589322785681157542 source: Fabric] Jan 24 00:51:17.431151 waagent[2027]: 2026-01-24T00:51:17.430933Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 24 00:51:17.431635 waagent[2027]: 2026-01-24T00:51:17.431587Z INFO ExtHandler Jan 24 00:51:17.432088 waagent[2027]: 2026-01-24T00:51:17.431683Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 24 00:51:17.540839 waagent[2027]: 2026-01-24T00:51:17.540747Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 24 00:51:17.612092 waagent[2027]: 2026-01-24T00:51:17.611850Z INFO ExtHandler Downloaded certificate {'thumbprint': '0F6EFF8559C5899B27B07E7EECBAB077723E9FA6', 'hasPrivateKey': True} Jan 24 00:51:17.614788 waagent[2027]: 2026-01-24T00:51:17.612600Z INFO ExtHandler Fetch goal state completed Jan 24 00:51:17.614788 waagent[2027]: 2026-01-24T00:51:17.613130Z INFO ExtHandler ExtHandler Jan 24 00:51:17.614788 waagent[2027]: 2026-01-24T00:51:17.613219Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: f261889e-7e0a-49d6-a1d5-ea82a2d90dc7 correlation 7356c604-fcc6-4a4e-ac67-6471f6f1b51d created: 2026-01-24T00:51:08.793175Z] Jan 24 00:51:17.614788 waagent[2027]: 2026-01-24T00:51:17.613578Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 24 00:51:17.615047 waagent[2027]: 2026-01-24T00:51:17.614991Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 24 00:51:18.306799 kubelet[3388]: E0124 00:51:18.306030 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:51:18.307385 containerd[1821]: time="2026-01-24T00:51:18.307086546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:51:18.585832 containerd[1821]: time="2026-01-24T00:51:18.585661165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:18.590109 containerd[1821]: time="2026-01-24T00:51:18.589052100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:51:18.590109 containerd[1821]: time="2026-01-24T00:51:18.589083801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:51:18.590314 kubelet[3388]: E0124 00:51:18.589388 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:18.590314 kubelet[3388]: E0124 00:51:18.589441 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:51:18.590314 kubelet[3388]: E0124 00:51:18.589704 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2n6cd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-c5bf95d6-rk94n_calico-system(37cb1a00-f2a5-4886-98cf-7e9aeba0026f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:18.590576 containerd[1821]: time="2026-01-24T00:51:18.590484915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:51:18.591291 kubelet[3388]: E0124 00:51:18.591250 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" 
podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:51:18.857197 containerd[1821]: time="2026-01-24T00:51:18.857047409Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:18.860286 containerd[1821]: time="2026-01-24T00:51:18.860240542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:51:18.861161 containerd[1821]: time="2026-01-24T00:51:18.860337343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:51:18.861248 kubelet[3388]: E0124 00:51:18.860525 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:18.861248 kubelet[3388]: E0124 00:51:18.860570 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:51:18.861248 kubelet[3388]: E0124 00:51:18.860688 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znj2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:18.862262 kubelet[3388]: E0124 00:51:18.862192 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:51:20.366311 systemd[1]: Started sshd@8-10.200.4.5:22-10.200.16.10:57100.service - OpenSSH per-connection server daemon (10.200.16.10:57100). Jan 24 00:51:21.133795 sshd[6114]: Accepted publickey for core from 10.200.16.10 port 57100 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:21.135424 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:21.141072 systemd-logind[1792]: New session 11 of user core. Jan 24 00:51:21.148331 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:51:21.783236 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:21.790202 systemd[1]: sshd@8-10.200.4.5:22-10.200.16.10:57100.service: Deactivated successfully. Jan 24 00:51:21.796859 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:51:21.799622 systemd-logind[1792]: Session 11 logged out. Waiting for processes to exit. Jan 24 00:51:21.801691 systemd-logind[1792]: Removed session 11. 
Jan 24 00:51:23.304273 containerd[1821]: time="2026-01-24T00:51:23.303978908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:51:23.580249 containerd[1821]: time="2026-01-24T00:51:23.580082901Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:23.592712 containerd[1821]: time="2026-01-24T00:51:23.592594732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:51:23.592712 containerd[1821]: time="2026-01-24T00:51:23.592654833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:51:23.594965 kubelet[3388]: E0124 00:51:23.593040 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:23.594965 kubelet[3388]: E0124 00:51:23.593273 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:51:23.594965 kubelet[3388]: E0124 00:51:23.593591 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:23.596612 containerd[1821]: time="2026-01-24T00:51:23.596578574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:51:23.858948 containerd[1821]: time="2026-01-24T00:51:23.858801722Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:51:23.862755 containerd[1821]: time="2026-01-24T00:51:23.862702162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:51:23.862755 containerd[1821]: time="2026-01-24T00:51:23.862791963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:51:23.863014 kubelet[3388]: E0124 00:51:23.862949 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:23.863079 kubelet[3388]: E0124 00:51:23.863016 3388 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:51:23.863228 kubelet[3388]: E0124 00:51:23.863178 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t7fq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-w66c2_calico-system(52d6adb2-e5fc-4ea6-8c92-021d49b0142f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:51:23.864461 kubelet[3388]: E0124 00:51:23.864419 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:51:26.304042 kubelet[3388]: E0124 00:51:26.303991 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:51:26.885343 systemd[1]: Started sshd@9-10.200.4.5:22-10.200.16.10:57114.service - OpenSSH per-connection server daemon (10.200.16.10:57114). Jan 24 00:51:27.488993 sshd[6129]: Accepted publickey for core from 10.200.16.10 port 57114 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:27.491564 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:27.499445 systemd-logind[1792]: New session 12 of user core. Jan 24 00:51:27.502296 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:51:28.104615 sshd[6129]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:28.108132 systemd-logind[1792]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:51:28.110775 systemd[1]: sshd@9-10.200.4.5:22-10.200.16.10:57114.service: Deactivated successfully. Jan 24 00:51:28.120966 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:51:28.123999 systemd-logind[1792]: Removed session 12. Jan 24 00:51:29.301266 kubelet[3388]: E0124 00:51:29.300840 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:51:30.303804 kubelet[3388]: E0124 00:51:30.302488 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:51:33.215866 systemd[1]: Started sshd@10-10.200.4.5:22-10.200.16.10:40850.service - OpenSSH per-connection server daemon (10.200.16.10:40850). 
Jan 24 00:51:33.301916 kubelet[3388]: E0124 00:51:33.301865 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:51:33.302583 kubelet[3388]: E0124 00:51:33.302287 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4" Jan 24 00:51:33.840487 sshd[6144]: Accepted publickey for core from 10.200.16.10 port 40850 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:33.843140 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:33.851102 systemd-logind[1792]: New session 13 of user core. Jan 24 00:51:33.857330 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:51:34.381563 sshd[6144]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:34.386942 systemd-logind[1792]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:51:34.388140 systemd[1]: sshd@10-10.200.4.5:22-10.200.16.10:40850.service: Deactivated successfully. Jan 24 00:51:34.398787 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:51:34.400876 systemd-logind[1792]: Removed session 13. Jan 24 00:51:34.493004 systemd[1]: Started sshd@11-10.200.4.5:22-10.200.16.10:40856.service - OpenSSH per-connection server daemon (10.200.16.10:40856). Jan 24 00:51:35.123119 sshd[6161]: Accepted publickey for core from 10.200.16.10 port 40856 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:35.124626 sshd[6161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:35.128819 systemd-logind[1792]: New session 14 of user core. Jan 24 00:51:35.134891 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:51:35.651525 sshd[6161]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:35.659233 systemd[1]: sshd@11-10.200.4.5:22-10.200.16.10:40856.service: Deactivated successfully. Jan 24 00:51:35.667834 systemd-logind[1792]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:51:35.670274 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 24 00:51:35.677732 systemd-logind[1792]: Removed session 14. Jan 24 00:51:35.763108 systemd[1]: Started sshd@12-10.200.4.5:22-10.200.16.10:40866.service - OpenSSH per-connection server daemon (10.200.16.10:40866). Jan 24 00:51:36.301023 kubelet[3388]: E0124 00:51:36.300804 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:51:36.398210 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 40866 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:36.400800 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:36.409259 systemd-logind[1792]: New session 15 of user core. Jan 24 00:51:36.417095 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 24 00:51:36.894969 sshd[6172]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:36.897993 systemd[1]: sshd@12-10.200.4.5:22-10.200.16.10:40866.service: Deactivated successfully. Jan 24 00:51:36.904039 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:51:36.904991 systemd-logind[1792]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:51:36.905953 systemd-logind[1792]: Removed session 15. Jan 24 00:51:40.304820 kubelet[3388]: E0124 00:51:40.302050 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb" Jan 24 00:51:42.001176 systemd[1]: Started sshd@13-10.200.4.5:22-10.200.16.10:45364.service - OpenSSH per-connection server daemon (10.200.16.10:45364). Jan 24 00:51:42.605134 sshd[6190]: Accepted publickey for core from 10.200.16.10 port 45364 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:51:42.607466 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:51:42.614995 systemd-logind[1792]: New session 16 of user core. Jan 24 00:51:42.621108 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 24 00:51:43.093176 sshd[6190]: pam_unix(sshd:session): session closed for user core Jan 24 00:51:43.098437 systemd-logind[1792]: Session 16 logged out. Waiting for processes to exit. 
Jan 24 00:51:43.099162 systemd[1]: sshd@13-10.200.4.5:22-10.200.16.10:45364.service: Deactivated successfully. Jan 24 00:51:43.106578 systemd[1]: session-16.scope: Deactivated successfully. Jan 24 00:51:43.107996 systemd-logind[1792]: Removed session 16. Jan 24 00:51:43.299292 kubelet[3388]: E0124 00:51:43.299248 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74" Jan 24 00:51:44.304958 kubelet[3388]: E0124 00:51:44.303850 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e" Jan 24 00:51:45.300349 kubelet[3388]: E0124 00:51:45.300218 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f" Jan 24 00:51:47.303804 kubelet[3388]: E0124 00:51:47.303676 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f" Jan 24 00:51:48.204070 systemd[1]: Started sshd@14-10.200.4.5:22-10.200.16.10:45366.service - OpenSSH per-connection server daemon (10.200.16.10:45366). 
Jan 24 00:51:48.303508 kubelet[3388]: E0124 00:51:48.303383 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4"
Jan 24 00:51:48.808875 sshd[6224]: Accepted publickey for core from 10.200.16.10 port 45366 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:48.812086 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:48.823479 systemd-logind[1792]: New session 17 of user core.
Jan 24 00:51:48.827378 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:51:49.371779 sshd[6224]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:49.376629 systemd[1]: sshd@14-10.200.4.5:22-10.200.16.10:45366.service: Deactivated successfully.
Jan 24 00:51:49.381055 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:51:49.381211 systemd-logind[1792]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:51:49.382936 systemd-logind[1792]: Removed session 17.
Jan 24 00:51:54.301784 kubelet[3388]: E0124 00:51:54.301714 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74"
Jan 24 00:51:54.479075 systemd[1]: Started sshd@15-10.200.4.5:22-10.200.16.10:48456.service - OpenSSH per-connection server daemon (10.200.16.10:48456).
Jan 24 00:51:55.080820 sshd[6237]: Accepted publickey for core from 10.200.16.10 port 48456 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:51:55.082019 sshd[6237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:51:55.089153 systemd-logind[1792]: New session 18 of user core.
Jan 24 00:51:55.096126 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:51:55.301898 kubelet[3388]: E0124 00:51:55.301826 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb"
Jan 24 00:51:55.608363 sshd[6237]: pam_unix(sshd:session): session closed for user core
Jan 24 00:51:55.613713 systemd-logind[1792]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:51:55.614606 systemd[1]: sshd@15-10.200.4.5:22-10.200.16.10:48456.service: Deactivated successfully.
Jan 24 00:51:55.626057 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:51:55.627282 systemd-logind[1792]: Removed session 18.
Jan 24 00:51:56.299538 kubelet[3388]: E0124 00:51:56.299192 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e"
Jan 24 00:51:58.313795 kubelet[3388]: E0124 00:51:58.313240 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f"
Jan 24 00:51:59.299924 kubelet[3388]: E0124 00:51:59.299885 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f"
Jan 24 00:52:00.715179 systemd[1]: Started sshd@16-10.200.4.5:22-10.200.16.10:59608.service - OpenSSH per-connection server daemon (10.200.16.10:59608).
Jan 24 00:52:01.303229 kubelet[3388]: E0124 00:52:01.303163 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4"
Jan 24 00:52:01.321790 sshd[6253]: Accepted publickey for core from 10.200.16.10 port 59608 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:01.322848 sshd[6253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:01.339399 systemd-logind[1792]: New session 19 of user core.
Jan 24 00:52:01.343096 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:52:01.940379 sshd[6253]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:01.950992 systemd[1]: sshd@16-10.200.4.5:22-10.200.16.10:59608.service: Deactivated successfully.
Jan 24 00:52:01.957269 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:52:01.958366 systemd-logind[1792]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:52:01.959695 systemd-logind[1792]: Removed session 19.
Jan 24 00:52:02.048054 systemd[1]: Started sshd@17-10.200.4.5:22-10.200.16.10:59612.service - OpenSSH per-connection server daemon (10.200.16.10:59612).
Jan 24 00:52:02.664272 sshd[6267]: Accepted publickey for core from 10.200.16.10 port 59612 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:02.666147 sshd[6267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:02.670499 systemd-logind[1792]: New session 20 of user core.
Jan 24 00:52:02.676178 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:52:03.226497 sshd[6267]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:03.234890 systemd[1]: sshd@17-10.200.4.5:22-10.200.16.10:59612.service: Deactivated successfully.
Jan 24 00:52:03.236340 systemd-logind[1792]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:52:03.248586 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:52:03.250845 systemd-logind[1792]: Removed session 20.
Jan 24 00:52:03.338065 systemd[1]: Started sshd@18-10.200.4.5:22-10.200.16.10:59624.service - OpenSSH per-connection server daemon (10.200.16.10:59624).
Jan 24 00:52:03.962418 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 59624 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:03.967555 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:03.976677 systemd-logind[1792]: New session 21 of user core.
Jan 24 00:52:03.983093 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:52:05.384521 sshd[6279]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:05.389717 systemd[1]: sshd@18-10.200.4.5:22-10.200.16.10:59624.service: Deactivated successfully.
Jan 24 00:52:05.396091 systemd-logind[1792]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:52:05.397265 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:52:05.399921 systemd-logind[1792]: Removed session 21.
Jan 24 00:52:05.490227 systemd[1]: Started sshd@19-10.200.4.5:22-10.200.16.10:59636.service - OpenSSH per-connection server daemon (10.200.16.10:59636).
Jan 24 00:52:06.103087 sshd[6301]: Accepted publickey for core from 10.200.16.10 port 59636 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:06.105382 sshd[6301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:06.112822 systemd-logind[1792]: New session 22 of user core.
Jan 24 00:52:06.117614 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:52:06.891300 sshd[6301]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:06.900648 systemd[1]: sshd@19-10.200.4.5:22-10.200.16.10:59636.service: Deactivated successfully.
Jan 24 00:52:06.910349 systemd-logind[1792]: Session 22 logged out. Waiting for processes to exit.
Jan 24 00:52:06.911026 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 00:52:06.914623 systemd-logind[1792]: Removed session 22.
Jan 24 00:52:06.996592 systemd[1]: Started sshd@20-10.200.4.5:22-10.200.16.10:59644.service - OpenSSH per-connection server daemon (10.200.16.10:59644).
Jan 24 00:52:07.614809 sshd[6313]: Accepted publickey for core from 10.200.16.10 port 59644 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:07.617436 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:07.624306 systemd-logind[1792]: New session 23 of user core.
Jan 24 00:52:07.631080 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 00:52:08.175500 sshd[6313]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:08.180246 systemd-logind[1792]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:52:08.181276 systemd[1]: sshd@20-10.200.4.5:22-10.200.16.10:59644.service: Deactivated successfully.
Jan 24 00:52:08.186518 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:52:08.189268 systemd-logind[1792]: Removed session 23.
Jan 24 00:52:09.301123 kubelet[3388]: E0124 00:52:09.299914 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74"
Jan 24 00:52:09.305626 kubelet[3388]: E0124 00:52:09.304973 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e"
Jan 24 00:52:10.302007 kubelet[3388]: E0124 00:52:10.301957 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f"
Jan 24 00:52:10.303949 kubelet[3388]: E0124 00:52:10.303720 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb"
Jan 24 00:52:10.306984 kubelet[3388]: E0124 00:52:10.304930 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f"
Jan 24 00:52:13.279116 systemd[1]: Started sshd@21-10.200.4.5:22-10.200.16.10:58336.service - OpenSSH per-connection server daemon (10.200.16.10:58336).
Jan 24 00:52:13.879088 sshd[6329]: Accepted publickey for core from 10.200.16.10 port 58336 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:13.880658 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:13.885679 systemd-logind[1792]: New session 24 of user core.
Jan 24 00:52:13.892028 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:52:14.422004 sshd[6329]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:14.430397 systemd[1]: sshd@21-10.200.4.5:22-10.200.16.10:58336.service: Deactivated successfully.
Jan 24 00:52:14.438672 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:52:14.439830 systemd-logind[1792]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:52:14.442141 systemd-logind[1792]: Removed session 24.
Jan 24 00:52:15.300218 kubelet[3388]: E0124 00:52:15.300167 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4"
Jan 24 00:52:19.526871 systemd[1]: Started sshd@22-10.200.4.5:22-10.200.16.10:45814.service - OpenSSH per-connection server daemon (10.200.16.10:45814).
Jan 24 00:52:20.143917 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 45814 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:20.145901 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:20.153714 systemd-logind[1792]: New session 25 of user core.
Jan 24 00:52:20.164344 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 00:52:20.669019 sshd[6364]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:20.677194 systemd[1]: sshd@22-10.200.4.5:22-10.200.16.10:45814.service: Deactivated successfully.
Jan 24 00:52:20.688542 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 00:52:20.690718 systemd-logind[1792]: Session 25 logged out. Waiting for processes to exit.
Jan 24 00:52:20.693322 systemd-logind[1792]: Removed session 25.
Jan 24 00:52:22.308124 kubelet[3388]: E0124 00:52:22.308069 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb"
Jan 24 00:52:22.308721 kubelet[3388]: E0124 00:52:22.308556 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74"
Jan 24 00:52:22.308721 kubelet[3388]: E0124 00:52:22.308620 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e"
Jan 24 00:52:23.298802 kubelet[3388]: E0124 00:52:23.298713 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f"
Jan 24 00:52:25.301594 kubelet[3388]: E0124 00:52:25.301304 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f"
Jan 24 00:52:25.775176 systemd[1]: Started sshd@23-10.200.4.5:22-10.200.16.10:45830.service - OpenSSH per-connection server daemon (10.200.16.10:45830).
Jan 24 00:52:26.386592 sshd[6384]: Accepted publickey for core from 10.200.16.10 port 45830 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:26.388733 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:26.402266 systemd-logind[1792]: New session 26 of user core.
Jan 24 00:52:26.409751 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 00:52:26.914311 sshd[6384]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:26.919452 systemd-logind[1792]: Session 26 logged out. Waiting for processes to exit.
Jan 24 00:52:26.920395 systemd[1]: sshd@23-10.200.4.5:22-10.200.16.10:45830.service: Deactivated successfully.
Jan 24 00:52:26.929606 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 00:52:26.931077 systemd-logind[1792]: Removed session 26.
Jan 24 00:52:28.300825 containerd[1821]: time="2026-01-24T00:52:28.300529181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 24 00:52:28.593437 containerd[1821]: time="2026-01-24T00:52:28.591123522Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:52:28.598345 containerd[1821]: time="2026-01-24T00:52:28.597400984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 24 00:52:28.598345 containerd[1821]: time="2026-01-24T00:52:28.597428584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 24 00:52:28.598549 kubelet[3388]: E0124 00:52:28.597726 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:52:28.598549 kubelet[3388]: E0124 00:52:28.597828 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 24 00:52:28.598549 kubelet[3388]: E0124 00:52:28.597988 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:5d4d61cbab0549149b9649b46b6d3269,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:52:28.600546 containerd[1821]: time="2026-01-24T00:52:28.600305812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 24 00:52:28.870716 containerd[1821]: time="2026-01-24T00:52:28.870437953Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:52:28.873803 containerd[1821]: time="2026-01-24T00:52:28.873580484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 24 00:52:28.873803 containerd[1821]: time="2026-01-24T00:52:28.873684885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 24 00:52:28.876783 kubelet[3388]: E0124 00:52:28.874115 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:52:28.876783 kubelet[3388]: E0124 00:52:28.874186 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 24 00:52:28.876783 kubelet[3388]: E0124 00:52:28.874324 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4px27,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-55c747bbf5-8vn48_calico-system(cdf55fad-ab11-450d-ad4f-7c531f40d0f4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:52:28.877396 kubelet[3388]: E0124 00:52:28.877174 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4"
Jan 24 00:52:32.029087 systemd[1]: Started sshd@24-10.200.4.5:22-10.200.16.10:40686.service - OpenSSH per-connection server daemon (10.200.16.10:40686).
Jan 24 00:52:32.694785 sshd[6401]: Accepted publickey for core from 10.200.16.10 port 40686 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:32.700078 sshd[6401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:32.707075 systemd-logind[1792]: New session 27 of user core.
Jan 24 00:52:32.717104 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 24 00:52:33.202003 sshd[6401]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:33.211440 systemd-logind[1792]: Session 27 logged out. Waiting for processes to exit.
Jan 24 00:52:33.213138 systemd[1]: sshd@24-10.200.4.5:22-10.200.16.10:40686.service: Deactivated successfully.
Jan 24 00:52:33.224638 systemd[1]: session-27.scope: Deactivated successfully.
Jan 24 00:52:33.229608 systemd-logind[1792]: Removed session 27.
Jan 24 00:52:34.303169 kubelet[3388]: E0124 00:52:34.302829 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74"
Jan 24 00:52:34.304272 containerd[1821]: time="2026-01-24T00:52:34.303960031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:52:34.572058 containerd[1821]: time="2026-01-24T00:52:34.571911954Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:52:34.575073 containerd[1821]: time="2026-01-24T00:52:34.575016384Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:52:34.575190 containerd[1821]: time="2026-01-24T00:52:34.575107885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:52:34.575409 kubelet[3388]: E0124 00:52:34.575366 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:52:34.575497 kubelet[3388]: E0124 00:52:34.575424 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:52:34.575705 kubelet[3388]: E0124 00:52:34.575655 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8jrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-d45k5_calico-apiserver(feb072f7-3316-4b11-9780-0976f355dc5e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:52:34.576933 kubelet[3388]: E0124 00:52:34.576862 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-d45k5" podUID="feb072f7-3316-4b11-9780-0976f355dc5e"
Jan 24 00:52:34.577082 containerd[1821]: time="2026-01-24T00:52:34.577021104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 24 00:52:34.840095 containerd[1821]: time="2026-01-24T00:52:34.839926678Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:52:34.843629 containerd[1821]: time="2026-01-24T00:52:34.843575414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 24 00:52:34.843757 containerd[1821]: time="2026-01-24T00:52:34.843638614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:52:34.846549 kubelet[3388]: E0124 00:52:34.845918 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:52:34.846549 kubelet[3388]: E0124 00:52:34.845982 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 24 00:52:34.846549 kubelet[3388]: E0124 00:52:34.846136 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmzph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-7pw49_calico-system(6da5a353-0459-4899-8898-8a79910e38eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:52:34.847683 kubelet[3388]: E0124 00:52:34.847624 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-7pw49" podUID="6da5a353-0459-4899-8898-8a79910e38eb"
Jan 24 00:52:35.300356 kubelet[3388]: E0124 00:52:35.300303 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-c5bf95d6-rk94n" podUID="37cb1a00-f2a5-4886-98cf-7e9aeba0026f"
Jan 24 00:52:38.309233 systemd[1]: Started sshd@25-10.200.4.5:22-10.200.16.10:40690.service - OpenSSH per-connection server daemon (10.200.16.10:40690).
Jan 24 00:52:38.928072 sshd[6417]: Accepted publickey for core from 10.200.16.10 port 40690 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:38.928845 sshd[6417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:38.935603 systemd-logind[1792]: New session 28 of user core.
Jan 24 00:52:38.940069 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 24 00:52:39.409986 sshd[6417]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:39.416667 systemd[1]: sshd@25-10.200.4.5:22-10.200.16.10:40690.service: Deactivated successfully.
Jan 24 00:52:39.419832 systemd-logind[1792]: Session 28 logged out. Waiting for processes to exit.
Jan 24 00:52:39.422642 systemd[1]: session-28.scope: Deactivated successfully.
Jan 24 00:52:39.425809 systemd-logind[1792]: Removed session 28.
Jan 24 00:52:40.302190 kubelet[3388]: E0124 00:52:40.302145 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-55c747bbf5-8vn48" podUID="cdf55fad-ab11-450d-ad4f-7c531f40d0f4"
Jan 24 00:52:40.303451 kubelet[3388]: E0124 00:52:40.302909 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-w66c2" podUID="52d6adb2-e5fc-4ea6-8c92-021d49b0142f"
Jan 24 00:52:44.523160 systemd[1]: Started sshd@26-10.200.4.5:22-10.200.16.10:49950.service - OpenSSH per-connection server daemon (10.200.16.10:49950).
Jan 24 00:52:45.144793 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 49950 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:52:45.146726 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:52:45.153040 systemd-logind[1792]: New session 29 of user core.
Jan 24 00:52:45.160103 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 24 00:52:45.634193 sshd[6432]: pam_unix(sshd:session): session closed for user core
Jan 24 00:52:45.639613 systemd[1]: sshd@26-10.200.4.5:22-10.200.16.10:49950.service: Deactivated successfully.
Jan 24 00:52:45.648216 systemd[1]: session-29.scope: Deactivated successfully.
Jan 24 00:52:45.649074 systemd-logind[1792]: Session 29 logged out. Waiting for processes to exit.
Jan 24 00:52:45.650100 systemd-logind[1792]: Removed session 29.
Jan 24 00:52:47.299585 containerd[1821]: time="2026-01-24T00:52:47.299412045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 24 00:52:47.563845 containerd[1821]: time="2026-01-24T00:52:47.562017919Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 24 00:52:47.567662 containerd[1821]: time="2026-01-24T00:52:47.567531673Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 24 00:52:47.567662 containerd[1821]: time="2026-01-24T00:52:47.567607574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 24 00:52:47.568249 kubelet[3388]: E0124 00:52:47.567889 3388 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:52:47.569389 kubelet[3388]: E0124 00:52:47.567952 3388 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 24 00:52:47.569389 kubelet[3388]: E0124 00:52:47.569140 3388 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-znj2f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7d87ffcbb4-hrz4c_calico-apiserver(ce17dab6-c6ae-4d47-91e5-8ead47b1af74): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 24 00:52:47.570919 kubelet[3388]: E0124 00:52:47.570825 3388 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7d87ffcbb4-hrz4c" podUID="ce17dab6-c6ae-4d47-91e5-8ead47b1af74"