Jan 17 00:28:52.175526 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:28:52.175573 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:28:52.175588 kernel: BIOS-provided physical RAM map:
Jan 17 00:28:52.175599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:28:52.175609 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 17 00:28:52.175620 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 17 00:28:52.175633 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 17 00:28:52.175648 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 17 00:28:52.175659 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 17 00:28:52.175669 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 17 00:28:52.175680 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 17 00:28:52.175690 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 17 00:28:52.175701 kernel: printk: bootconsole [earlyser0] enabled
Jan 17 00:28:52.175712 kernel: NX (Execute Disable) protection: active
Jan 17 00:28:52.175729 kernel: APIC: Static calls initialized
Jan 17 00:28:52.175740 kernel: efi: EFI v2.7 by Microsoft
Jan 17 00:28:52.175753 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98
Jan 17 00:28:52.175764 kernel: SMBIOS 3.1.0 present.
Jan 17 00:28:52.175775 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 17 00:28:52.175787 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 17 00:28:52.175799 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 17 00:28:52.175813 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jan 17 00:28:52.175823 kernel: Hyper-V: Nested features: 0x1e0101
Jan 17 00:28:52.175833 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 17 00:28:52.175848 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 17 00:28:52.175860 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 00:28:52.175874 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 00:28:52.175888 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 17 00:28:52.175900 kernel: tsc: Detected 2593.907 MHz processor
Jan 17 00:28:52.175913 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:28:52.175926 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:28:52.175937 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 17 00:28:52.175951 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:28:52.175970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:28:52.175983 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 17 00:28:52.175996 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 17 00:28:52.176011 kernel: Using GB pages for direct mapping
Jan 17 00:28:52.176024 kernel: Secure boot disabled
Jan 17 00:28:52.176038 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:28:52.176050 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 17 00:28:52.176073 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176093 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176108 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 17 00:28:52.176120 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 17 00:28:52.176135 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176149 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176164 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176180 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176196 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176212 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176229 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:28:52.176245 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 17 00:28:52.176257 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 17 00:28:52.176270 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 17 00:28:52.176282 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 17 00:28:52.176300 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 17 00:28:52.176328 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 17 00:28:52.176340 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 17 00:28:52.176354 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 17 00:28:52.176367 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 17 00:28:52.176379 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 17 00:28:52.176393 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:28:52.176405 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:28:52.176418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 17 00:28:52.176437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 17 00:28:52.176452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 17 00:28:52.176466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 17 00:28:52.176480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 17 00:28:52.176494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 17 00:28:52.176508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 17 00:28:52.176523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 17 00:28:52.176537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 17 00:28:52.176552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 17 00:28:52.176570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 17 00:28:52.176585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 17 00:28:52.176599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 17 00:28:52.176613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 17 00:28:52.176627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 17 00:28:52.176641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 17 00:28:52.176655 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 17 00:28:52.176670 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 17 00:28:52.176684 kernel: Zone ranges:
Jan 17 00:28:52.176703 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:28:52.176717 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:28:52.176730 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 00:28:52.176744 kernel: Movable zone start for each node
Jan 17 00:28:52.176758 kernel: Early memory node ranges
Jan 17 00:28:52.176772 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:28:52.176786 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 17 00:28:52.176800 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 17 00:28:52.176814 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 00:28:52.176832 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 17 00:28:52.176847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:28:52.176861 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:28:52.176875 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 17 00:28:52.176889 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 17 00:28:52.176903 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 17 00:28:52.176917 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:28:52.176931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:28:52.176946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:28:52.176963 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 17 00:28:52.176978 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:28:52.176992 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 17 00:28:52.177006 kernel: Booting paravirtualized kernel on Hyper-V
Jan 17 00:28:52.177021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:28:52.177035 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:28:52.177049 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:28:52.177063 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:28:52.177078 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:28:52.177095 kernel: Hyper-V: PV spinlocks enabled
Jan 17 00:28:52.177110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:28:52.177125 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:28:52.177140 kernel: random: crng init done
Jan 17 00:28:52.177153 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 00:28:52.177168 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:28:52.177182 kernel: Fallback order for Node 0: 0
Jan 17 00:28:52.177197 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 17 00:28:52.177215 kernel: Policy zone: Normal
Jan 17 00:28:52.177242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:28:52.177257 kernel: software IO TLB: area num 2.
Jan 17 00:28:52.177276 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved)
Jan 17 00:28:52.177291 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:28:52.177306 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:28:52.177342 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:28:52.177357 kernel: Dynamic Preempt: voluntary
Jan 17 00:28:52.177372 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:28:52.177389 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:28:52.177410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:28:52.177426 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:28:52.177441 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:28:52.177456 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:28:52.177472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:28:52.177488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:28:52.177507 kernel: Using NULL legacy PIC
Jan 17 00:28:52.177522 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 17 00:28:52.177538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:28:52.177554 kernel: Console: colour dummy device 80x25
Jan 17 00:28:52.177568 kernel: printk: console [tty1] enabled
Jan 17 00:28:52.177584 kernel: printk: console [ttyS0] enabled
Jan 17 00:28:52.177599 kernel: printk: bootconsole [earlyser0] disabled
Jan 17 00:28:52.177614 kernel: ACPI: Core revision 20230628
Jan 17 00:28:52.177629 kernel: Failed to register legacy timer interrupt
Jan 17 00:28:52.177644 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:28:52.177663 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 00:28:52.177678 kernel: Hyper-V: Using IPI hypercalls
Jan 17 00:28:52.177693 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 17 00:28:52.177709 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 17 00:28:52.177724 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 17 00:28:52.177739 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 17 00:28:52.177755 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 17 00:28:52.177770 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 17 00:28:52.177785 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 17 00:28:52.177804 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:28:52.177819 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:28:52.177834 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:28:52.177849 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:28:52.177864 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:28:52.177879 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:28:52.177894 kernel: RETBleed: Vulnerable
Jan 17 00:28:52.177909 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:28:52.177924 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:28:52.177939 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:28:52.177958 kernel: active return thunk: its_return_thunk
Jan 17 00:28:52.177972 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:28:52.177987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:28:52.178002 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:28:52.178017 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:28:52.178032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:28:52.178047 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:28:52.178062 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:28:52.178077 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:28:52.178091 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 17 00:28:52.178106 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 17 00:28:52.178125 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 00:28:52.178139 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 17 00:28:52.178154 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:28:52.178170 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:28:52.178184 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:28:52.178199 kernel: landlock: Up and running.
Jan 17 00:28:52.178214 kernel: SELinux: Initializing.
Jan 17 00:28:52.178230 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:28:52.178245 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:28:52.178261 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:28:52.178277 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:28:52.178297 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:28:52.178353 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:28:52.178365 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:28:52.178377 kernel: signal: max sigframe size: 3632
Jan 17 00:28:52.178389 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:28:52.178401 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:28:52.178413 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:28:52.178425 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:28:52.178437 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:28:52.178453 kernel: .... node #0, CPUs: #1
Jan 17 00:28:52.178465 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 17 00:28:52.178479 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:28:52.178490 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:28:52.178502 kernel: smpboot: Max logical packages: 1
Jan 17 00:28:52.178515 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 17 00:28:52.178526 kernel: devtmpfs: initialized
Jan 17 00:28:52.178538 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:28:52.178554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 17 00:28:52.178566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:28:52.178579 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:28:52.178591 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:28:52.178604 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:28:52.178617 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:28:52.178630 kernel: audit: type=2000 audit(1768609730.030:1): state=initialized audit_enabled=0 res=1
Jan 17 00:28:52.178642 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:28:52.178655 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:28:52.178671 kernel: cpuidle: using governor menu
Jan 17 00:28:52.178684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:28:52.178697 kernel: dca service started, version 1.12.1
Jan 17 00:28:52.178710 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 17 00:28:52.178724 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:28:52.178737 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:28:52.178750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:28:52.178763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:28:52.178777 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:28:52.178793 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:28:52.178807 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:28:52.178820 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:28:52.178833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:28:52.178847 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:28:52.178860 kernel: ACPI: Interpreter enabled
Jan 17 00:28:52.178874 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:28:52.178887 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:28:52.178901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:28:52.178918 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 00:28:52.178932 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 17 00:28:52.178945 kernel: iommu: Default domain type: Translated
Jan 17 00:28:52.178958 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:28:52.178972 kernel: efivars: Registered efivars operations
Jan 17 00:28:52.178986 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:28:52.179000 kernel: PCI: System does not support PCI
Jan 17 00:28:52.179013 kernel: vgaarb: loaded
Jan 17 00:28:52.179027 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 17 00:28:52.179045 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:28:52.179058 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:28:52.179071 kernel: pnp: PnP ACPI init
Jan 17 00:28:52.179085 kernel: pnp: PnP ACPI: found 3 devices
Jan 17 00:28:52.179098 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:28:52.179112 kernel: NET: Registered PF_INET protocol family
Jan 17 00:28:52.179126 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:28:52.179141 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:28:52.179155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:28:52.179172 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:28:52.179187 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:28:52.179201 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:28:52.179215 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:28:52.179230 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:28:52.179244 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:28:52.179258 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:28:52.179272 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:28:52.179286 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:28:52.179304 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB)
Jan 17 00:28:52.179340 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:28:52.179354 kernel: Initialise system trusted keyrings
Jan 17 00:28:52.179368 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:28:52.179382 kernel: Key type asymmetric registered
Jan 17 00:28:52.179395 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:28:52.179409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:28:52.179423 kernel: io scheduler mq-deadline registered
Jan 17 00:28:52.179438 kernel: io scheduler kyber registered
Jan 17 00:28:52.179456 kernel: io scheduler bfq registered
Jan 17 00:28:52.179470 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:28:52.179482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:28:52.179495 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:28:52.179513 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:28:52.179530 kernel: i8042: PNP: No PS/2 controller found.
Jan 17 00:28:52.179850 kernel: rtc_cmos 00:02: registered as rtc0
Jan 17 00:28:52.179988 kernel: rtc_cmos 00:02: setting system clock to 2026-01-17T00:28:51 UTC (1768609731)
Jan 17 00:28:52.180108 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 17 00:28:52.180124 kernel: intel_pstate: CPU model not supported
Jan 17 00:28:52.180138 kernel: efifb: probing for efifb
Jan 17 00:28:52.180152 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 00:28:52.180166 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 00:28:52.180180 kernel: efifb: scrolling: redraw
Jan 17 00:28:52.180194 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:28:52.180208 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:28:52.180222 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:28:52.180241 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:28:52.180255 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:28:52.180269 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:28:52.180283 kernel: Segment Routing with IPv6
Jan 17 00:28:52.180297 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:28:52.180328 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:28:52.180351 kernel: Key type dns_resolver registered
Jan 17 00:28:52.180366 kernel: IPI shorthand broadcast: enabled
Jan 17 00:28:52.180382 kernel: sched_clock: Marking stable (1214004400, 51876000)->(1512322900, -246442500)
Jan 17 00:28:52.180401 kernel: registered taskstats version 1
Jan 17 00:28:52.180416 kernel: Loading compiled-in X.509 certificates
Jan 17 00:28:52.180431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:28:52.180446 kernel: Key type .fscrypt registered
Jan 17 00:28:52.180461 kernel: Key type fscrypt-provisioning registered
Jan 17 00:28:52.180476 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:28:52.180491 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:28:52.180506 kernel: ima: No architecture policies found
Jan 17 00:28:52.180520 kernel: clk: Disabling unused clocks
Jan 17 00:28:52.180539 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:28:52.180554 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:28:52.180569 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:28:52.180585 kernel: Run /init as init process
Jan 17 00:28:52.180600 kernel: with arguments:
Jan 17 00:28:52.180614 kernel: /init
Jan 17 00:28:52.180629 kernel: with environment:
Jan 17 00:28:52.180644 kernel: HOME=/
Jan 17 00:28:52.180659 kernel: TERM=linux
Jan 17 00:28:52.180681 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:28:52.180700 systemd[1]: Detected virtualization microsoft.
Jan 17 00:28:52.180716 systemd[1]: Detected architecture x86-64.
Jan 17 00:28:52.180730 systemd[1]: Running in initrd.
Jan 17 00:28:52.180746 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:28:52.180762 systemd[1]: Hostname set to .
Jan 17 00:28:52.180778 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:28:52.180799 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:28:52.180815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:28:52.180831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:28:52.180848 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:28:52.180865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:28:52.180881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:28:52.180897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:28:52.180920 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:28:52.180937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:28:52.180953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:28:52.180969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:28:52.180985 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:28:52.181001 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:28:52.181017 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:28:52.181034 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:28:52.181054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:28:52.181070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:28:52.181086 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:28:52.181104 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:28:52.181120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:28:52.181136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:28:52.181152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:28:52.181168 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:28:52.181184 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:28:52.181203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:28:52.181220 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:28:52.181235 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:28:52.181251 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:28:52.181267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:28:52.181334 systemd-journald[177]: Collecting audit messages is disabled.
Jan 17 00:28:52.181377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:28:52.181396 systemd-journald[177]: Journal started
Jan 17 00:28:52.181429 systemd-journald[177]: Runtime Journal (/run/log/journal/72f29538793f429b8fe04252055d9c92) is 8.0M, max 158.8M, 150.8M free.
Jan 17 00:28:52.190344 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:28:52.194943 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:28:52.198820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:28:52.205848 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:28:52.216132 systemd-modules-load[178]: Inserted module 'overlay'
Jan 17 00:28:52.225694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:28:52.251985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:28:52.253293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:28:52.277221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:28:52.292603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:28:52.284097 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:28:52.304351 kernel: Bridge firewalling registered
Jan 17 00:28:52.304533 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 17 00:28:52.307811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:28:52.311876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:28:52.318507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:28:52.327542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:28:52.355080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:28:52.363083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:28:52.367019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:28:52.383614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:28:52.402557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:28:52.413431 dracut-cmdline[212]: dracut-dracut-053
Jan 17 00:28:52.418035 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:28:52.478918 systemd-resolved[213]: Positive Trust Anchors:
Jan 17 00:28:52.478942 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:28:52.478994 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:28:52.511596 systemd-resolved[213]: Defaulting to hostname 'linux'.
Jan 17 00:28:52.516871 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:28:52.526604 kernel: SCSI subsystem initialized
Jan 17 00:28:52.526942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:28:52.543337 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:28:52.555346 kernel: iscsi: registered transport (tcp)
Jan 17 00:28:52.578937 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:28:52.579084 kernel: QLogic iSCSI HBA Driver
Jan 17 00:28:52.618676 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:28:52.627680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:28:52.662270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:28:52.662433 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:28:52.665946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:28:52.710348 kernel: raid6: avx512x4 gen() 26739 MB/s
Jan 17 00:28:52.729326 kernel: raid6: avx512x2 gen() 26633 MB/s
Jan 17 00:28:52.748326 kernel: raid6: avx512x1 gen() 26440 MB/s
Jan 17 00:28:52.768329 kernel: raid6: avx2x4 gen() 21757 MB/s
Jan 17 00:28:52.787340 kernel: raid6: avx2x2 gen() 22107 MB/s
Jan 17 00:28:52.807510 kernel: raid6: avx2x1 gen() 20598 MB/s
Jan 17 00:28:52.807555 kernel: raid6: using algorithm avx512x4 gen() 26739 MB/s
Jan 17 00:28:52.829213 kernel: raid6: .... xor() 5644 MB/s, rmw enabled
Jan 17 00:28:52.829251 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:28:52.853348 kernel: xor: automatically using best checksumming function avx
Jan 17 00:28:53.003345 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:28:53.014790 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:28:53.023730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:28:53.036995 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 17 00:28:53.041707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:28:53.064552 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:28:53.079228 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Jan 17 00:28:53.110926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:28:53.122531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:28:53.168436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:28:53.180559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:28:53.217549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:28:53.230071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:28:53.237604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:28:53.244940 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:28:53.256628 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:28:53.271506 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:28:53.293675 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:28:53.293875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:28:53.305307 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:28:53.311793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:28:53.336340 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:28:53.336394 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:28:53.336414 kernel: hv_vmbus: Vmbus version:5.2
Jan 17 00:28:53.312087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:28:53.325863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:28:53.343840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:28:53.350483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:28:53.371250 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 00:28:53.371329 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 17 00:28:53.394253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:28:53.400928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:28:53.410008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:28:53.425771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:28:53.438348 kernel: PTP clock support registered
Jan 17 00:28:53.438414 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 00:28:53.447644 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 17 00:28:53.453335 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 00:28:53.463838 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:28:53.471757 kernel: scsi host1: storvsc_host_t
Jan 17 00:28:53.471865 kernel: scsi host0: storvsc_host_t
Jan 17 00:28:53.471888 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 00:28:53.478481 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 00:28:53.486826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:28:53.499556 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 00:28:53.499666 kernel: hv_vmbus: registering driver hv_utils
Jan 17 00:28:53.504663 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 00:28:53.504732 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 17 00:28:53.504775 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 00:28:53.510548 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 00:28:53.387426 systemd-resolved[213]: Clock change detected. Flushing caches.
Jan 17 00:28:53.406386 systemd-journald[177]: Time jumped backwards, rotating.
Jan 17 00:28:53.394995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:28:53.414095 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 00:28:53.428507 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 17 00:28:53.429132 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:28:53.448186 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 00:28:53.448477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:28:53.448493 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 00:28:53.448636 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 00:28:53.457012 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 00:28:53.457372 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 00:28:53.460203 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:28:53.460520 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 00:28:53.464816 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 00:28:53.470796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:28:53.474772 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:28:53.582666 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: VF slot 1 added
Jan 17 00:28:53.591774 kernel: hv_vmbus: registering driver hv_pci
Jan 17 00:28:53.596787 kernel: hv_pci f425f880-96b3-4764-a3ae-c74c90fa0b75: PCI VMBus probing: Using version 0x10004
Jan 17 00:28:53.614790 kernel: hv_pci f425f880-96b3-4764-a3ae-c74c90fa0b75: PCI host bridge to bus 96b3:00
Jan 17 00:28:53.622838 kernel: pci_bus 96b3:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 17 00:28:53.623113 kernel: pci_bus 96b3:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 00:28:53.623231 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (449)
Jan 17 00:28:53.644771 kernel: pci 96b3:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 17 00:28:53.644888 kernel: pci 96b3:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:28:53.644923 kernel: pci 96b3:00:02.0: enabling Extended Tags
Jan 17 00:28:53.644955 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446)
Jan 17 00:28:53.660768 kernel: pci 96b3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 96b3:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 17 00:28:53.668794 kernel: pci_bus 96b3:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 00:28:53.678227 kernel: pci 96b3:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:28:53.672948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 00:28:53.694162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 00:28:53.712468 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 00:28:53.721811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 00:28:53.739830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:28:53.766072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:28:53.788796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:28:53.802336 kernel: GPT:disk_guids don't match.
Jan 17 00:28:53.802429 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:28:53.802464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:28:53.812764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:28:53.993290 kernel: mlx5_core 96b3:00:02.0: enabling device (0000 -> 0002)
Jan 17 00:28:53.997766 kernel: mlx5_core 96b3:00:02.0: firmware version: 14.30.5026
Jan 17 00:28:54.228765 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: VF registering: eth1
Jan 17 00:28:54.240935 kernel: mlx5_core 96b3:00:02.0 eth1: joined to eth0
Jan 17 00:28:54.241266 kernel: mlx5_core 96b3:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 17 00:28:54.259775 kernel: mlx5_core 96b3:00:02.0 enP38579s1: renamed from eth1
Jan 17 00:28:54.814766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:28:54.816140 disk-uuid[595]: The operation has completed successfully.
Jan 17 00:28:54.908098 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:28:54.908232 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:28:54.939253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:28:54.945890 sh[718]: Success
Jan 17 00:28:54.963968 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:28:55.062675 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:28:55.076915 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:28:55.082456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:28:55.110776 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:28:55.110844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:28:55.117484 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:28:55.120451 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:28:55.123299 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:28:55.187952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:28:55.194285 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:28:55.205038 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:28:55.217979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:28:55.237377 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:28:55.237474 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:28:55.239959 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:28:55.256935 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:28:55.274890 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:28:55.274973 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:28:55.289143 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:28:55.296088 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:28:55.345925 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:28:55.356198 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:28:55.395970 systemd-networkd[902]: lo: Link UP
Jan 17 00:28:55.395982 systemd-networkd[902]: lo: Gained carrier
Jan 17 00:28:55.398333 systemd-networkd[902]: Enumeration completed
Jan 17 00:28:55.398537 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:28:55.401247 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:28:55.401252 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:28:55.402309 systemd[1]: Reached target network.target - Network.
Jan 17 00:28:55.477783 kernel: mlx5_core 96b3:00:02.0 enP38579s1: Link up
Jan 17 00:28:55.515285 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: Data path switched to VF: enP38579s1
Jan 17 00:28:55.515660 systemd-networkd[902]: enP38579s1: Link UP
Jan 17 00:28:55.515924 systemd-networkd[902]: eth0: Link UP
Jan 17 00:28:55.516105 systemd-networkd[902]: eth0: Gained carrier
Jan 17 00:28:55.516116 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:28:55.535819 systemd-networkd[902]: enP38579s1: Gained carrier
Jan 17 00:28:55.561855 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 17 00:28:55.576005 ignition[837]: Ignition 2.19.0
Jan 17 00:28:55.576020 ignition[837]: Stage: fetch-offline
Jan 17 00:28:55.578959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:28:55.576081 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:55.576108 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:55.576267 ignition[837]: parsed url from cmdline: ""
Jan 17 00:28:55.576273 ignition[837]: no config URL provided
Jan 17 00:28:55.576281 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:28:55.576293 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:28:55.598152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:28:55.576303 ignition[837]: failed to fetch config: resource requires networking
Jan 17 00:28:55.576870 ignition[837]: Ignition finished successfully
Jan 17 00:28:55.622858 ignition[911]: Ignition 2.19.0
Jan 17 00:28:55.622871 ignition[911]: Stage: fetch
Jan 17 00:28:55.623140 ignition[911]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:55.623154 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:55.623281 ignition[911]: parsed url from cmdline: ""
Jan 17 00:28:55.623286 ignition[911]: no config URL provided
Jan 17 00:28:55.623293 ignition[911]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:28:55.623302 ignition[911]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:28:55.623326 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 00:28:55.706619 ignition[911]: GET result: OK
Jan 17 00:28:55.706792 ignition[911]: config has been read from IMDS userdata
Jan 17 00:28:55.706835 ignition[911]: parsing config with SHA512: 6ab3bb74c65cb72048ee98ad714aa3d9064a8525b0283225a624aef6967675c1eef3b66003c9f0e64be5f841cdd2ac2e5ac424585b4504fa15e29bc61903bd11
Jan 17 00:28:55.712944 unknown[911]: fetched base config from "system"
Jan 17 00:28:55.712958 unknown[911]: fetched base config from "system"
Jan 17 00:28:55.713479 ignition[911]: fetch: fetch complete
Jan 17 00:28:55.712966 unknown[911]: fetched user config from "azure"
Jan 17 00:28:55.713485 ignition[911]: fetch: fetch passed
Jan 17 00:28:55.715441 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:28:55.713538 ignition[911]: Ignition finished successfully
Jan 17 00:28:55.726234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:28:55.745603 ignition[917]: Ignition 2.19.0
Jan 17 00:28:55.745617 ignition[917]: Stage: kargs
Jan 17 00:28:55.749057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:28:55.745910 ignition[917]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:55.745927 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:55.747117 ignition[917]: kargs: kargs passed
Jan 17 00:28:55.763167 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:28:55.747177 ignition[917]: Ignition finished successfully
Jan 17 00:28:55.783014 ignition[923]: Ignition 2.19.0
Jan 17 00:28:55.783028 ignition[923]: Stage: disks
Jan 17 00:28:55.783292 ignition[923]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:55.783310 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:55.785109 ignition[923]: disks: disks passed
Jan 17 00:28:55.785187 ignition[923]: Ignition finished successfully
Jan 17 00:28:55.798078 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:28:55.803884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:28:55.807538 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:28:55.816502 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:28:55.816637 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:28:55.817505 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:28:55.835208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:28:55.869227 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:28:55.876049 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:28:55.892239 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:28:55.990768 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:28:55.991977 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:28:55.997298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:28:56.014940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:28:56.029792 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942)
Jan 17 00:28:56.035780 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:28:56.031054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:28:56.047775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:28:56.047861 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:28:56.048840 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:28:56.058685 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:28:56.054637 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:28:56.058747 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:28:56.076315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:28:56.081873 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:28:56.092017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:28:56.254086 coreos-metadata[957]: Jan 17 00:28:56.253 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:28:56.262462 coreos-metadata[957]: Jan 17 00:28:56.262 INFO Fetch successful
Jan 17 00:28:56.266083 coreos-metadata[957]: Jan 17 00:28:56.264 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:28:56.283283 coreos-metadata[957]: Jan 17 00:28:56.283 INFO Fetch successful
Jan 17 00:28:56.286317 coreos-metadata[957]: Jan 17 00:28:56.285 INFO wrote hostname ci-4081.3.6-n-2e1a0c4804 to /sysroot/etc/hostname
Jan 17 00:28:56.291048 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:28:56.288351 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:28:56.318637 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:28:56.325988 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:28:56.334773 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:28:56.615528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:28:56.626070 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:28:56.633912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:28:56.642653 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:28:56.649161 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:28:56.679453 ignition[1061]: INFO : Ignition 2.19.0
Jan 17 00:28:56.685862 ignition[1061]: INFO : Stage: mount
Jan 17 00:28:56.685862 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:56.685862 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:56.685862 ignition[1061]: INFO : mount: mount passed
Jan 17 00:28:56.685862 ignition[1061]: INFO : Ignition finished successfully
Jan 17 00:28:56.687162 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:28:56.706945 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:28:56.714761 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:28:56.730202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:28:56.750769 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072)
Jan 17 00:28:56.758400 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:28:56.758509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:28:56.761251 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:28:56.768772 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:28:56.770865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:28:56.798292 ignition[1089]: INFO : Ignition 2.19.0
Jan 17 00:28:56.801239 ignition[1089]: INFO : Stage: files
Jan 17 00:28:56.801239 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:28:56.801239 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:28:56.801239 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:28:56.813759 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:28:56.813759 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:28:56.830961 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jan 17 00:28:56.831507 unknown[1089]: wrote ssh authorized keys file for user: core
Jan 17 00:28:56.895719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 17 00:28:56.949227 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jan 17 00:28:57.341339 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 17 00:28:57.463397 systemd-networkd[902]: eth0: Gained IPv6LL
Jan 17 00:28:57.660653 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jan 17 00:28:57.660653 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 17 00:28:57.672934 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at
"/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: files passed Jan 17 00:28:57.712646 ignition[1089]: INFO : Ignition finished successfully Jan 17 00:28:57.708907 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:28:57.736144 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:28:57.743227 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:28:57.754341 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:28:57.754478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:28:57.768308 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.773847 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.784179 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.775492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:28:57.792386 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:28:57.801109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:28:57.845973 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:28:57.846128 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:28:57.849255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:28:57.849383 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:28:57.850367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:28:57.852939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:28:57.871630 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:28:57.889069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:28:57.914388 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:28:57.917948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:28:57.927516 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:28:57.932263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 00:28:57.932470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:28:57.939562 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:28:57.945234 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:28:57.951428 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:28:57.957941 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:28:57.964573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:28:57.971129 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:28:57.979919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:28:57.987007 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:28:57.990209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:28:57.995699 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:28:58.001207 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:28:58.001392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:28:58.009040 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:28:58.014842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:28:58.021478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:28:58.023986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:28:58.030515 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:28:58.030703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:28:58.046303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:28:58.046545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:28:58.056794 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:28:58.057042 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:28:58.061997 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:28:58.062185 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:28:58.082363 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:28:58.085263 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:28:58.087881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:28:58.095311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:28:58.104879 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:28:58.105213 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:28:58.109187 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 00:28:58.123311 ignition[1142]: INFO : Ignition 2.19.0 Jan 17 00:28:58.123311 ignition[1142]: INFO : Stage: umount Jan 17 00:28:58.123311 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:58.123311 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:58.123311 ignition[1142]: INFO : umount: umount passed Jan 17 00:28:58.123311 ignition[1142]: INFO : Ignition finished successfully Jan 17 00:28:58.109998 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:28:58.125921 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:28:58.126034 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:28:58.135818 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:28:58.135953 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:28:58.142657 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:28:58.143057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:28:58.146402 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:28:58.146469 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:28:58.153582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:28:58.153664 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:28:58.160527 systemd[1]: Stopped target network.target - Network. Jan 17 00:28:58.166137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:28:58.168964 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:28:58.172600 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:28:58.177980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:28:58.178065 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:28:58.184682 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:28:58.187362 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:28:58.192716 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:28:58.192811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:28:58.198473 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:28:58.198544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:28:58.204408 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:28:58.204499 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:28:58.211503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:28:58.211576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:28:58.217904 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:28:58.230851 systemd-networkd[902]: eth0: DHCPv6 lease lost Jan 17 00:28:58.232124 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:28:58.241351 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:28:58.242348 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:28:58.242484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:28:58.248849 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 00:28:58.248985 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:28:58.254515 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:28:58.254614 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:28:58.287829 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:28:58.290821 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:28:58.290938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:28:58.299927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:28:58.300018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:28:58.305907 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:28:58.305972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:28:58.311828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:28:58.311893 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:28:58.353304 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:28:58.372140 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:28:58.374725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:28:58.382304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:28:58.382414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:28:58.391650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:28:58.391719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:28:58.400486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:28:58.400596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:28:58.408636 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:28:58.408732 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:28:58.424108 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: Data path switched from VF: enP38579s1 Jan 17 00:28:58.416622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:28:58.416700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:28:58.431128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:28:58.433950 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:28:58.434061 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:28:58.437153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:28:58.437236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:58.449452 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:28:58.449592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:28:58.460692 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:28:58.460829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:28:58.720654 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 17 00:28:58.720874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:28:58.726931 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:28:58.732623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:28:58.732733 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:28:58.748142 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:28:58.782925 systemd[1]: Switching root. Jan 17 00:28:58.804755 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 17 00:28:58.804891 systemd-journald[177]: Journal stopped Jan 17 00:28:52.175526 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:28:52.175573 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:28:52.175588 kernel: BIOS-provided physical RAM map: Jan 17 00:28:52.175599 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:28:52.175609 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 17 00:28:52.175620 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 17 00:28:52.175633 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jan 17 00:28:52.175648 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jan 17 00:28:52.175659 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 17 00:28:52.175669 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 17 00:28:52.175680 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 17 00:28:52.175690 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 17 00:28:52.175701 kernel: printk: bootconsole [earlyser0] enabled Jan 17 00:28:52.175712 kernel: NX (Execute Disable) protection: active Jan 17 00:28:52.175729 kernel: APIC: Static calls initialized Jan 17 00:28:52.175740 kernel: efi: EFI v2.7 by Microsoft Jan 17 00:28:52.175753 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Jan 17 00:28:52.175764 kernel: SMBIOS 3.1.0 present. 
Jan 17 00:28:52.175775 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 17 00:28:52.175787 kernel: Hypervisor detected: Microsoft Hyper-V Jan 17 00:28:52.175799 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 17 00:28:52.175813 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 Jan 17 00:28:52.175823 kernel: Hyper-V: Nested features: 0x1e0101 Jan 17 00:28:52.175833 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 17 00:28:52.175848 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 17 00:28:52.175860 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 17 00:28:52.175874 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 17 00:28:52.175888 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 17 00:28:52.175900 kernel: tsc: Detected 2593.907 MHz processor Jan 17 00:28:52.175913 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:28:52.175926 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:28:52.175937 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 17 00:28:52.175951 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:28:52.175970 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:28:52.175983 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 17 00:28:52.175996 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 17 00:28:52.176011 kernel: Using GB pages for direct mapping Jan 17 00:28:52.176024 kernel: Secure boot disabled Jan 17 00:28:52.176038 kernel: ACPI: Early table checksum verification disabled Jan 17 00:28:52.176050 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 17 00:28:52.176073 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176093 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176108 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 17 00:28:52.176120 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 17 00:28:52.176135 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176149 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176164 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176180 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176196 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176212 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176229 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:28:52.176245 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 17 00:28:52.176257 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 17 00:28:52.176270 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 17 00:28:52.176282 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 17 00:28:52.176300 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 17 00:28:52.176328 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 17 00:28:52.176340 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 17 00:28:52.176354 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 17 00:28:52.176367 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 17 00:28:52.176379 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 17 00:28:52.176393 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:28:52.176405 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:28:52.176418 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 17 00:28:52.176437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 17 00:28:52.176452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 17 00:28:52.176466 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 17 00:28:52.176480 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 17 00:28:52.176494 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 17 00:28:52.176508 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 17 00:28:52.176523 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 17 00:28:52.176537 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 17 00:28:52.176552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 17 00:28:52.176570 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 17 00:28:52.176585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 17 00:28:52.176599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 17 00:28:52.176613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 17 00:28:52.176627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 17 00:28:52.176641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 17 00:28:52.176655 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 17 00:28:52.176670 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 17 00:28:52.176684 kernel: Zone ranges: Jan 17 00:28:52.176703 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:28:52.176717 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:28:52.176730 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 00:28:52.176744 kernel: Movable zone start for each node Jan 17 00:28:52.176758 kernel: Early memory node ranges Jan 17 00:28:52.176772 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:28:52.176786 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 17 00:28:52.176800 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 17 00:28:52.176814 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 00:28:52.176832 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 17 00:28:52.176847 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:28:52.176861 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:28:52.176875 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 17 00:28:52.176889 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 17 00:28:52.176903 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 17 00:28:52.176917 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:28:52.176931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:28:52.176946 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:28:52.176963 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 17 00:28:52.176978 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:28:52.176992 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 17 00:28:52.177006 kernel: Booting paravirtualized kernel on Hyper-V Jan 17 00:28:52.177021 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:28:52.177035 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:28:52.177049 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:28:52.177063 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:28:52.177078 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:28:52.177095 kernel: Hyper-V: PV spinlocks enabled Jan 17 00:28:52.177110 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:28:52.177125 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:28:52.177140 kernel: random: crng init done Jan 17 00:28:52.177153 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:28:52.177168 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:28:52.177182 kernel: Fallback order for Node 0: 0 Jan 17 00:28:52.177197 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 17 00:28:52.177215 kernel: Policy zone: Normal Jan 17 00:28:52.177242 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:28:52.177257 kernel: software IO TLB: area num 2. Jan 17 00:28:52.177276 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved) Jan 17 00:28:52.177291 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:28:52.177306 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:28:52.177342 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:28:52.177357 kernel: Dynamic Preempt: voluntary Jan 17 00:28:52.177372 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:28:52.177389 kernel: rcu: RCU event tracing is enabled. Jan 17 00:28:52.177410 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:28:52.177426 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:28:52.177441 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:28:52.177456 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:28:52.177472 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 17 00:28:52.177488 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:28:52.177507 kernel: Using NULL legacy PIC Jan 17 00:28:52.177522 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 17 00:28:52.177538 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 00:28:52.177554 kernel: Console: colour dummy device 80x25 Jan 17 00:28:52.177568 kernel: printk: console [tty1] enabled Jan 17 00:28:52.177584 kernel: printk: console [ttyS0] enabled Jan 17 00:28:52.177599 kernel: printk: bootconsole [earlyser0] disabled Jan 17 00:28:52.177614 kernel: ACPI: Core revision 20230628 Jan 17 00:28:52.177629 kernel: Failed to register legacy timer interrupt Jan 17 00:28:52.177644 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:28:52.177663 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:28:52.177678 kernel: Hyper-V: Using IPI hypercalls Jan 17 00:28:52.177693 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 17 00:28:52.177709 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 17 00:28:52.177724 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 17 00:28:52.177739 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 17 00:28:52.177755 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 17 00:28:52.177770 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 17 00:28:52.177785 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 17 00:28:52.177804 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 00:28:52.177819 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 17 00:28:52.177834 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:28:52.177849 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:28:52.177864 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:28:52.177879 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 17 00:28:52.177894 kernel: RETBleed: Vulnerable Jan 17 00:28:52.177909 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:28:52.177924 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:28:52.177939 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:28:52.177958 kernel: active return thunk: its_return_thunk Jan 17 00:28:52.177972 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:28:52.177987 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:28:52.178002 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:28:52.178017 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:28:52.178032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 00:28:52.178047 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 00:28:52.178062 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 00:28:52.178077 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:28:52.178091 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 17 00:28:52.178106 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 17 00:28:52.178125 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 17 00:28:52.178139 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 17 00:28:52.178154 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:28:52.178170 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:28:52.178184 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:28:52.178199 kernel: landlock: Up and running. Jan 17 00:28:52.178214 kernel: SELinux: Initializing. Jan 17 00:28:52.178230 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:28:52.178245 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:28:52.178261 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 00:28:52.178277 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:28:52.178297 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:28:52.178353 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:28:52.178365 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 00:28:52.178377 kernel: signal: max sigframe size: 3632 Jan 17 00:28:52.178389 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:28:52.178401 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:28:52.178413 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:28:52.178425 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:28:52.178437 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:28:52.178453 kernel: .... node #0, CPUs: #1 Jan 17 00:28:52.178465 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 17 00:28:52.178479 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 17 00:28:52.178490 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:28:52.178502 kernel: smpboot: Max logical packages: 1 Jan 17 00:28:52.178515 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 17 00:28:52.178526 kernel: devtmpfs: initialized Jan 17 00:28:52.178538 kernel: x86/mm: Memory block size: 128MB Jan 17 00:28:52.178554 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 17 00:28:52.178566 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:28:52.178579 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:28:52.178591 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:28:52.178604 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:28:52.178617 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:28:52.178630 kernel: audit: type=2000 audit(1768609730.030:1): state=initialized audit_enabled=0 res=1 Jan 17 00:28:52.178642 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:28:52.178655 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:28:52.178671 kernel: cpuidle: using governor menu Jan 17 00:28:52.178684 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:28:52.178697 kernel: dca service started, version 1.12.1 Jan 17 00:28:52.178710 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 17 00:28:52.178724 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:28:52.178737 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:28:52.178750 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:28:52.178763 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:28:52.178777 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:28:52.178793 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:28:52.178807 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:28:52.178820 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:28:52.178833 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:28:52.178847 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:28:52.178860 kernel: ACPI: Interpreter enabled Jan 17 00:28:52.178874 kernel: ACPI: PM: (supports S0 S5) Jan 17 00:28:52.178887 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:28:52.178901 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:28:52.178918 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:28:52.178932 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 17 00:28:52.178945 kernel: iommu: Default domain type: Translated Jan 17 00:28:52.178958 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:28:52.178972 kernel: efivars: Registered efivars operations Jan 17 00:28:52.178986 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:28:52.179000 kernel: PCI: System does not support PCI Jan 17 00:28:52.179013 kernel: vgaarb: loaded Jan 17 00:28:52.179027 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 17 00:28:52.179045 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:28:52.179058 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 00:28:52.179071 kernel: pnp: PnP ACPI init Jan 17 00:28:52.179085 kernel: pnp: PnP ACPI: found 3 
devices Jan 17 00:28:52.179098 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:28:52.179112 kernel: NET: Registered PF_INET protocol family Jan 17 00:28:52.179126 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:28:52.179141 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:28:52.179155 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:28:52.179172 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:28:52.179187 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:28:52.179201 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:28:52.179215 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:28:52.179230 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:28:52.179244 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:28:52.179258 kernel: NET: Registered PF_XDP protocol family Jan 17 00:28:52.179272 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:28:52.179286 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:28:52.179304 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Jan 17 00:28:52.179340 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:28:52.179354 kernel: Initialise system trusted keyrings Jan 17 00:28:52.179368 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 00:28:52.179382 kernel: Key type asymmetric registered Jan 17 00:28:52.179395 kernel: Asymmetric key parser 'x509' registered Jan 17 00:28:52.179409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:28:52.179423 kernel: io scheduler mq-deadline registered Jan 17 00:28:52.179438 kernel: io scheduler kyber registered Jan 17 00:28:52.179456 kernel: io scheduler bfq registered Jan 17 00:28:52.179470 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:28:52.179482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:28:52.179495 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:28:52.179513 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:28:52.179530 kernel: i8042: PNP: No PS/2 controller found. 
Jan 17 00:28:52.179850 kernel: rtc_cmos 00:02: registered as rtc0 Jan 17 00:28:52.179988 kernel: rtc_cmos 00:02: setting system clock to 2026-01-17T00:28:51 UTC (1768609731) Jan 17 00:28:52.180108 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 17 00:28:52.180124 kernel: intel_pstate: CPU model not supported Jan 17 00:28:52.180138 kernel: efifb: probing for efifb Jan 17 00:28:52.180152 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:28:52.180166 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:28:52.180180 kernel: efifb: scrolling: redraw Jan 17 00:28:52.180194 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:28:52.180208 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:28:52.180222 kernel: fb0: EFI VGA frame buffer device Jan 17 00:28:52.180241 kernel: pstore: Using crash dump compression: deflate Jan 17 00:28:52.180255 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:28:52.180269 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:28:52.180283 kernel: Segment Routing with IPv6 Jan 17 00:28:52.180297 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:28:52.180328 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:28:52.180351 kernel: Key type dns_resolver registered Jan 17 00:28:52.180366 kernel: IPI shorthand broadcast: enabled Jan 17 00:28:52.180382 kernel: sched_clock: Marking stable (1214004400, 51876000)->(1512322900, -246442500) Jan 17 00:28:52.180401 kernel: registered taskstats version 1 Jan 17 00:28:52.180416 kernel: Loading compiled-in X.509 certificates Jan 17 00:28:52.180431 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:28:52.180446 kernel: Key type .fscrypt registered Jan 17 00:28:52.180461 kernel: Key type fscrypt-provisioning registered Jan 17 00:28:52.180476 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:28:52.180491 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:28:52.180506 kernel: ima: No architecture policies found Jan 17 00:28:52.180520 kernel: clk: Disabling unused clocks Jan 17 00:28:52.180539 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:28:52.180554 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:28:52.180569 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:28:52.180585 kernel: Run /init as init process Jan 17 00:28:52.180600 kernel: with arguments: Jan 17 00:28:52.180614 kernel: /init Jan 17 00:28:52.180629 kernel: with environment: Jan 17 00:28:52.180644 kernel: HOME=/ Jan 17 00:28:52.180659 kernel: TERM=linux Jan 17 00:28:52.180681 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:28:52.180700 systemd[1]: Detected virtualization microsoft. Jan 17 00:28:52.180716 systemd[1]: Detected architecture x86-64. Jan 17 00:28:52.180730 systemd[1]: Running in initrd. Jan 17 00:28:52.180746 systemd[1]: No hostname configured, using default hostname. Jan 17 00:28:52.180762 systemd[1]: Hostname set to <localhost>. Jan 17 00:28:52.180778 systemd[1]: Initializing machine ID from random generator. 
Jan 17 00:28:52.180799 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:28:52.180815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:28:52.180831 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:28:52.180848 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:28:52.180865 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:28:52.180881 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:28:52.180897 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:28:52.180920 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:28:52.180937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:28:52.180953 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:28:52.180969 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:28:52.180985 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:28:52.181001 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:28:52.181017 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:28:52.181034 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:28:52.181054 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:28:52.181070 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:28:52.181086 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:28:52.181104 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:28:52.181120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:28:52.181136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:28:52.181152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:28:52.181168 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:28:52.181184 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:28:52.181203 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:28:52.181220 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:28:52.181235 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:28:52.181251 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:28:52.181267 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:28:52.181334 systemd-journald[177]: Collecting audit messages is disabled. Jan 17 00:28:52.181377 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:28:52.181396 systemd-journald[177]: Journal started Jan 17 00:28:52.181429 systemd-journald[177]: Runtime Journal (/run/log/journal/72f29538793f429b8fe04252055d9c92) is 8.0M, max 158.8M, 150.8M free. Jan 17 00:28:52.190344 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:28:52.194943 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. 
Jan 17 00:28:52.198820 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:28:52.205848 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:28:52.216132 systemd-modules-load[178]: Inserted module 'overlay' Jan 17 00:28:52.225694 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:28:52.251985 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:28:52.253293 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:52.277221 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:28:52.292603 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:28:52.284097 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:28:52.304351 kernel: Bridge firewalling registered Jan 17 00:28:52.304533 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 17 00:28:52.307811 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:28:52.311876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:28:52.318507 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:28:52.327542 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:28:52.355080 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:28:52.363083 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:28:52.367019 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:28:52.383614 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:28:52.402557 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:28:52.413431 dracut-cmdline[212]: dracut-dracut-053 Jan 17 00:28:52.418035 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:28:52.478918 systemd-resolved[213]: Positive Trust Anchors: Jan 17 00:28:52.478942 systemd-resolved[213]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:28:52.478994 systemd-resolved[213]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:28:52.511596 systemd-resolved[213]: Defaulting to hostname 'linux'. 
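The dracut-cmdline hook above re-parses the kernel command line the bootloader passed in. A toy parser for its key=value grammar (real parsers also handle quoting; repeated keys such as the two console= entries clobber earlier ones in a plain dict):

    # Minimal sketch of splitting /proc/cmdline into parameters.
    def parse_cmdline(path="/proc/cmdline"):
        params = {}
        with open(path) as f:
            for token in f.read().split():
                key, sep, value = token.partition("=")
                # Bare flags (no "=") are recorded as True.
                params[key] = value if sep else True
        return params

    params = parse_cmdline()
    print(params.get("flatcar.oem.id"))  # "azure" on this boot
    print(params.get("verity.usrhash"))  # dm-verity root hash for /usr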
Jan 17 00:28:52.516871 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:28:52.526604 kernel: SCSI subsystem initialized Jan 17 00:28:52.526942 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:28:52.543337 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:28:52.555346 kernel: iscsi: registered transport (tcp) Jan 17 00:28:52.578937 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:28:52.579084 kernel: QLogic iSCSI HBA Driver Jan 17 00:28:52.618676 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:28:52.627680 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:28:52.662270 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:28:52.662433 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:28:52.665946 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:28:52.710348 kernel: raid6: avx512x4 gen() 26739 MB/s Jan 17 00:28:52.729326 kernel: raid6: avx512x2 gen() 26633 MB/s Jan 17 00:28:52.748326 kernel: raid6: avx512x1 gen() 26440 MB/s Jan 17 00:28:52.768329 kernel: raid6: avx2x4 gen() 21757 MB/s Jan 17 00:28:52.787340 kernel: raid6: avx2x2 gen() 22107 MB/s Jan 17 00:28:52.807510 kernel: raid6: avx2x1 gen() 20598 MB/s Jan 17 00:28:52.807555 kernel: raid6: using algorithm avx512x4 gen() 26739 MB/s Jan 17 00:28:52.829213 kernel: raid6: .... xor() 5644 MB/s, rmw enabled Jan 17 00:28:52.829251 kernel: raid6: using avx512x2 recovery algorithm Jan 17 00:28:52.853348 kernel: xor: automatically using best checksumming function avx Jan 17 00:28:53.003345 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:28:53.014790 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:28:53.023730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:28:53.036995 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 17 00:28:53.041707 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:28:53.064552 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:28:53.079228 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation Jan 17 00:28:53.110926 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:28:53.122531 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:28:53.168436 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:28:53.180559 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 00:28:53.217549 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 00:28:53.230071 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:28:53.237604 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:28:53.244940 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:28:53.256628 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 00:28:53.271506 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 00:28:53.293675 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:28:53.293875 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:28:53.305307 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:28:53.311793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:28:53.336340 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 00:28:53.336394 kernel: AES CTR mode by8 optimization enabled Jan 17 00:28:53.336414 kernel: hv_vmbus: Vmbus version:5.2 Jan 17 00:28:53.312087 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:53.325863 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:28:53.343840 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:28:53.350483 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:28:53.371250 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 00:28:53.371329 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 00:28:53.394253 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:28:53.400928 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:53.410008 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:28:53.425771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:28:53.438348 kernel: PTP clock support registered Jan 17 00:28:53.438414 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 00:28:53.447644 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 17 00:28:53.453335 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 00:28:53.463838 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 00:28:53.471757 kernel: scsi host1: storvsc_host_t Jan 17 00:28:53.471865 kernel: scsi host0: storvsc_host_t Jan 17 00:28:53.471888 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 00:28:53.478481 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 00:28:53.486826 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:53.499556 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 00:28:53.499666 kernel: hv_vmbus: registering driver hv_utils Jan 17 00:28:53.504663 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 00:28:53.504732 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 17 00:28:53.504775 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 00:28:53.510548 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 00:28:53.387426 systemd-resolved[213]: Clock change detected. Flushing caches. Jan 17 00:28:53.406386 systemd-journald[177]: Time jumped backwards, rotating. Jan 17 00:28:53.394995 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:28:53.414095 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 00:28:53.428507 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 17 00:28:53.429132 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 00:28:53.448186 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 00:28:53.448477 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 00:28:53.448493 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 00:28:53.448636 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 00:28:53.457012 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 00:28:53.457372 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 00:28:53.460203 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 00:28:53.460520 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 00:28:53.464816 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 00:28:53.470796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:28:53.474772 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 00:28:53.582666 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: VF slot 1 added Jan 17 00:28:53.591774 kernel: hv_vmbus: registering driver hv_pci Jan 17 00:28:53.596787 kernel: hv_pci f425f880-96b3-4764-a3ae-c74c90fa0b75: PCI VMBus probing: Using version 0x10004 Jan 17 00:28:53.614790 kernel: hv_pci f425f880-96b3-4764-a3ae-c74c90fa0b75: PCI host bridge to bus 96b3:00 Jan 17 00:28:53.622838 kernel: pci_bus 96b3:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 17 00:28:53.623113 kernel: pci_bus 96b3:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 00:28:53.623231 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (449) Jan 17 00:28:53.644771 kernel: pci 96b3:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 17 00:28:53.644888 kernel: pci 96b3:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 17 00:28:53.644923 kernel: pci 96b3:00:02.0: enabling Extended Tags Jan 17 00:28:53.644955 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (446) Jan 17 00:28:53.660768 kernel: pci 96b3:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 96b3:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 17 00:28:53.668794 kernel: pci_bus 96b3:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 00:28:53.678227 kernel: pci 96b3:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 17 00:28:53.672948 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 00:28:53.694162 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 00:28:53.712468 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 00:28:53.721811 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 00:28:53.739830 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:28:53.766072 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 00:28:53.788796 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:28:53.802336 kernel: GPT:disk_guids don't match. Jan 17 00:28:53.802429 kernel: GPT: Use GNU Parted to correct GPT errors. 
Jan 17 00:28:53.802464 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:28:53.812764 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:28:53.993290 kernel: mlx5_core 96b3:00:02.0: enabling device (0000 -> 0002) Jan 17 00:28:53.997766 kernel: mlx5_core 96b3:00:02.0: firmware version: 14.30.5026 Jan 17 00:28:54.228765 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: VF registering: eth1 Jan 17 00:28:54.240935 kernel: mlx5_core 96b3:00:02.0 eth1: joined to eth0 Jan 17 00:28:54.241266 kernel: mlx5_core 96b3:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 00:28:54.259775 kernel: mlx5_core 96b3:00:02.0 enP38579s1: renamed from eth1 Jan 17 00:28:54.814766 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 00:28:54.816140 disk-uuid[595]: The operation has completed successfully. Jan 17 00:28:54.908098 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 00:28:54.908232 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 00:28:54.939253 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 00:28:54.945890 sh[718]: Success Jan 17 00:28:54.963968 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 00:28:55.062675 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 00:28:55.076915 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 00:28:55.082456 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 00:28:55.110776 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a Jan 17 00:28:55.110844 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:28:55.117484 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 00:28:55.120451 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 00:28:55.123299 kernel: BTRFS info (device dm-0): using free space tree Jan 17 00:28:55.187952 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 00:28:55.194285 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 00:28:55.205038 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 00:28:55.217979 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 00:28:55.237377 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:28:55.237474 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:28:55.239959 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:28:55.256935 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:28:55.274890 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:28:55.274973 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 00:28:55.289143 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 00:28:55.296088 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 00:28:55.345925 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:28:55.356198 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 17 00:28:55.395970 systemd-networkd[902]: lo: Link UP Jan 17 00:28:55.395982 systemd-networkd[902]: lo: Gained carrier Jan 17 00:28:55.398333 systemd-networkd[902]: Enumeration completed Jan 17 00:28:55.398537 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:28:55.401247 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:28:55.401252 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:28:55.402309 systemd[1]: Reached target network.target - Network. Jan 17 00:28:55.477783 kernel: mlx5_core 96b3:00:02.0 enP38579s1: Link up Jan 17 00:28:55.515285 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: Data path switched to VF: enP38579s1 Jan 17 00:28:55.515660 systemd-networkd[902]: enP38579s1: Link UP Jan 17 00:28:55.515924 systemd-networkd[902]: eth0: Link UP Jan 17 00:28:55.516105 systemd-networkd[902]: eth0: Gained carrier Jan 17 00:28:55.516116 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:28:55.535819 systemd-networkd[902]: enP38579s1: Gained carrier Jan 17 00:28:55.561855 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 00:28:55.576005 ignition[837]: Ignition 2.19.0 Jan 17 00:28:55.576020 ignition[837]: Stage: fetch-offline Jan 17 00:28:55.578959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:28:55.576081 ignition[837]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:55.576108 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:55.576267 ignition[837]: parsed url from cmdline: "" Jan 17 00:28:55.576273 ignition[837]: no config URL provided Jan 17 00:28:55.576281 ignition[837]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:28:55.576293 ignition[837]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:28:55.598152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 00:28:55.576303 ignition[837]: failed to fetch config: resource requires networking Jan 17 00:28:55.576870 ignition[837]: Ignition finished successfully Jan 17 00:28:55.622858 ignition[911]: Ignition 2.19.0 Jan 17 00:28:55.622871 ignition[911]: Stage: fetch Jan 17 00:28:55.623140 ignition[911]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:55.623154 ignition[911]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:55.623281 ignition[911]: parsed url from cmdline: "" Jan 17 00:28:55.623286 ignition[911]: no config URL provided Jan 17 00:28:55.623293 ignition[911]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 00:28:55.623302 ignition[911]: no config at "/usr/lib/ignition/user.ign" Jan 17 00:28:55.623326 ignition[911]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 00:28:55.706619 ignition[911]: GET result: OK Jan 17 00:28:55.706792 ignition[911]: config has been read from IMDS userdata Jan 17 00:28:55.706835 ignition[911]: parsing config with SHA512: 6ab3bb74c65cb72048ee98ad714aa3d9064a8525b0283225a624aef6967675c1eef3b66003c9f0e64be5f841cdd2ac2e5ac424585b4504fa15e29bc61903bd11 Jan 17 00:28:55.712944 unknown[911]: fetched base config from "system" Jan 17 00:28:55.712958 unknown[911]: fetched base config from "system" Jan 17 00:28:55.713479 ignition[911]: fetch: fetch complete Jan 17 00:28:55.712966 unknown[911]: fetched user config from "azure" Jan 17 00:28:55.713485 ignition[911]: fetch: fetch passed Jan 17 00:28:55.715441 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 00:28:55.713538 ignition[911]: Ignition finished successfully Jan 17 00:28:55.726234 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 00:28:55.745603 ignition[917]: Ignition 2.19.0 Jan 17 00:28:55.745617 ignition[917]: Stage: kargs Jan 17 00:28:55.749057 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 00:28:55.745910 ignition[917]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:55.745927 ignition[917]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:55.747117 ignition[917]: kargs: kargs passed Jan 17 00:28:55.763167 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 00:28:55.747177 ignition[917]: Ignition finished successfully Jan 17 00:28:55.783014 ignition[923]: Ignition 2.19.0 Jan 17 00:28:55.783028 ignition[923]: Stage: disks Jan 17 00:28:55.783292 ignition[923]: no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:55.783310 ignition[923]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:55.785109 ignition[923]: disks: disks passed Jan 17 00:28:55.785187 ignition[923]: Ignition finished successfully Jan 17 00:28:55.798078 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 00:28:55.803884 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 00:28:55.807538 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:28:55.816502 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:28:55.816637 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:28:55.817505 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:28:55.835208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 00:28:55.869227 systemd-fsck[931]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 00:28:55.876049 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 00:28:55.892239 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 00:28:55.990768 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none. Jan 17 00:28:55.991977 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 00:28:55.997298 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 00:28:56.014940 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:28:56.029792 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (942) Jan 17 00:28:56.035780 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:28:56.031054 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 00:28:56.047775 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:28:56.047861 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:28:56.048840 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 00:28:56.058685 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:28:56.054637 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 00:28:56.058747 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:28:56.076315 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:28:56.081873 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 00:28:56.092017 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 00:28:56.254086 coreos-metadata[957]: Jan 17 00:28:56.253 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:28:56.262462 coreos-metadata[957]: Jan 17 00:28:56.262 INFO Fetch successful Jan 17 00:28:56.266083 coreos-metadata[957]: Jan 17 00:28:56.264 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:28:56.283283 coreos-metadata[957]: Jan 17 00:28:56.283 INFO Fetch successful Jan 17 00:28:56.286317 coreos-metadata[957]: Jan 17 00:28:56.285 INFO wrote hostname ci-4081.3.6-n-2e1a0c4804 to /sysroot/etc/hostname Jan 17 00:28:56.291048 initrd-setup-root[971]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 00:28:56.288351 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:28:56.318637 initrd-setup-root[979]: cut: /sysroot/etc/group: No such file or directory Jan 17 00:28:56.325988 initrd-setup-root[986]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 00:28:56.334773 initrd-setup-root[993]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 00:28:56.615528 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 00:28:56.626070 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 00:28:56.633912 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 17 00:28:56.642653 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 17 00:28:56.649161 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:28:56.679453 ignition[1061]: INFO : Ignition 2.19.0 Jan 17 00:28:56.685862 ignition[1061]: INFO : Stage: mount Jan 17 00:28:56.685862 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:56.685862 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:56.685862 ignition[1061]: INFO : mount: mount passed Jan 17 00:28:56.685862 ignition[1061]: INFO : Ignition finished successfully Jan 17 00:28:56.687162 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 00:28:56.706945 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 00:28:56.714761 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 00:28:56.730202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 00:28:56.750769 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1072) Jan 17 00:28:56.758400 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0 Jan 17 00:28:56.758509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 00:28:56.761251 kernel: BTRFS info (device sda6): using free space tree Jan 17 00:28:56.768772 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 00:28:56.770865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 00:28:56.798292 ignition[1089]: INFO : Ignition 2.19.0 Jan 17 00:28:56.801239 ignition[1089]: INFO : Stage: files Jan 17 00:28:56.801239 ignition[1089]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:56.801239 ignition[1089]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:56.801239 ignition[1089]: DEBUG : files: compiled without relabeling support, skipping Jan 17 00:28:56.813759 ignition[1089]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 00:28:56.813759 ignition[1089]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 00:28:56.830961 ignition[1089]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:28:56.837058 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 17 00:28:56.831507 unknown[1089]: wrote ssh authorized keys file for user: core Jan 17 00:28:56.895719 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 00:28:56.949227 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:28:56.955723 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jan 17 00:28:57.341339 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 00:28:57.463397 systemd-networkd[902]: eth0: Gained IPv6LL Jan 17 00:28:57.660653 ignition[1089]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jan 17 00:28:57.660653 ignition[1089]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 00:28:57.672934 ignition[1089]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 00:28:57.678979 ignition[1089]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 00:28:57.692150 ignition[1089]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 00:28:57.712646 ignition[1089]: INFO : files: files passed Jan 17 00:28:57.712646 ignition[1089]: INFO : Ignition finished successfully Jan 17 00:28:57.708907 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 00:28:57.736144 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 00:28:57.743227 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 00:28:57.754341 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 00:28:57.754478 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 00:28:57.768308 initrd-setup-root-after-ignition[1118]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.773847 initrd-setup-root-after-ignition[1122]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.784179 initrd-setup-root-after-ignition[1118]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 00:28:57.775492 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:28:57.792386 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 00:28:57.801109 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 00:28:57.845973 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 00:28:57.846128 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 00:28:57.849255 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 00:28:57.849383 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 00:28:57.850367 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 00:28:57.852939 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 00:28:57.871630 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:28:57.889069 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 00:28:57.914388 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:28:57.917948 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:28:57.927516 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 00:28:57.932263 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 00:28:57.932470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 00:28:57.939562 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 00:28:57.945234 systemd[1]: Stopped target basic.target - Basic System. Jan 17 00:28:57.951428 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 00:28:57.957941 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 00:28:57.964573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 00:28:57.971129 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 00:28:57.979919 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 00:28:57.987007 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 00:28:57.990209 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 00:28:57.995699 systemd[1]: Stopped target swap.target - Swaps. Jan 17 00:28:58.001207 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 00:28:58.001392 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 00:28:58.009040 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:28:58.014842 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:28:58.021478 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 00:28:58.023986 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:28:58.030515 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 00:28:58.030703 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 00:28:58.046303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 00:28:58.046545 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 00:28:58.056794 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 00:28:58.057042 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 00:28:58.061997 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 00:28:58.062185 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 00:28:58.082363 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 00:28:58.085263 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 00:28:58.087881 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:28:58.095311 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 00:28:58.104879 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 00:28:58.105213 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:28:58.109187 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 00:28:58.123311 ignition[1142]: INFO : Ignition 2.19.0 Jan 17 00:28:58.123311 ignition[1142]: INFO : Stage: umount Jan 17 00:28:58.123311 ignition[1142]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 00:28:58.123311 ignition[1142]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 00:28:58.123311 ignition[1142]: INFO : umount: umount passed Jan 17 00:28:58.123311 ignition[1142]: INFO : Ignition finished successfully Jan 17 00:28:58.109998 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:28:58.125921 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 00:28:58.126034 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 00:28:58.135818 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 00:28:58.135953 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 00:28:58.142657 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 00:28:58.143057 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 00:28:58.146402 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 00:28:58.146469 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 00:28:58.153582 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 00:28:58.153664 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 00:28:58.160527 systemd[1]: Stopped target network.target - Network. Jan 17 00:28:58.166137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 00:28:58.168964 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 00:28:58.172600 systemd[1]: Stopped target paths.target - Path Units. Jan 17 00:28:58.177980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 00:28:58.178065 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:28:58.184682 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 00:28:58.187362 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 00:28:58.192716 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 00:28:58.192811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:28:58.198473 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 00:28:58.198544 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:28:58.204408 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 00:28:58.204499 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 00:28:58.211503 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 00:28:58.211576 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 00:28:58.217904 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 00:28:58.230851 systemd-networkd[902]: eth0: DHCPv6 lease lost Jan 17 00:28:58.232124 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 00:28:58.241351 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 00:28:58.242348 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 00:28:58.242484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 00:28:58.248849 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Jan 17 00:28:58.248985 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 00:28:58.254515 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 00:28:58.254614 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:28:58.287829 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 00:28:58.290821 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 00:28:58.290938 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 00:28:58.299927 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 00:28:58.300018 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:28:58.305907 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 00:28:58.305972 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 00:28:58.311828 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 00:28:58.311893 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:28:58.353304 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:28:58.372140 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 00:28:58.374725 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:28:58.382304 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 00:28:58.382414 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 00:28:58.391650 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 00:28:58.391719 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:28:58.400486 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 00:28:58.400596 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:28:58.408636 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 00:28:58.408732 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 00:28:58.424108 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: Data path switched from VF: enP38579s1 Jan 17 00:28:58.416622 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 00:28:58.416700 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:28:58.431128 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 00:28:58.433950 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 00:28:58.434061 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:28:58.437153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:28:58.437236 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:28:58.449452 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 00:28:58.449592 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 00:28:58.460692 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 00:28:58.460829 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 00:28:58.720654 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 17 00:28:58.720874 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 00:28:58.726931 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 00:28:58.732623 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 00:28:58.732733 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 00:28:58.748142 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 00:28:58.782925 systemd[1]: Switching root. Jan 17 00:28:58.804755 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 17 00:28:58.804891 systemd-journald[177]: Journal stopped Jan 17 00:29:01.333417 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 00:29:01.333462 kernel: SELinux: policy capability open_perms=1 Jan 17 00:29:01.333481 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 00:29:01.333491 kernel: SELinux: policy capability always_check_network=0 Jan 17 00:29:01.333499 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 00:29:01.333518 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 00:29:01.333538 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 00:29:01.333555 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 00:29:01.333564 kernel: audit: type=1403 audit(1768609739.809:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 00:29:01.333584 systemd[1]: Successfully loaded SELinux policy in 74.518ms. Jan 17 00:29:01.333608 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.250ms. Jan 17 00:29:01.333629 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:29:01.333639 systemd[1]: Detected virtualization microsoft. Jan 17 00:29:01.333649 systemd[1]: Detected architecture x86-64. Jan 17 00:29:01.333662 systemd[1]: Detected first boot. Jan 17 00:29:01.333677 systemd[1]: Hostname set to <ci-4081.3.6-n-2e1a0c4804>. Jan 17 00:29:01.333704 systemd[1]: Initializing machine ID from random generator. Jan 17 00:29:01.333719 zram_generator::config[1203]: No configuration found. Jan 17 00:29:01.333730 systemd[1]: Populated /etc with preset unit settings. Jan 17 00:29:01.333759 systemd[1]: Queued start job for default target multi-user.target. Jan 17 00:29:01.333776 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 00:29:01.333787 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 00:29:01.333807 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 00:29:01.333824 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 00:29:01.333836 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 00:29:01.333860 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 00:29:01.333887 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 00:29:01.333897 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 00:29:01.333908 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 00:29:01.333922 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:29:01.333945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:29:01.333968 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 00:29:01.333987 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 00:29:01.333997 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 00:29:01.334018 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:29:01.334041 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 00:29:01.334065 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:29:01.334079 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 00:29:01.334089 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 00:29:01.334103 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 00:29:01.334133 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:29:01.334150 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:29:01.334161 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 00:29:01.334187 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 00:29:01.334206 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:29:01.334217 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:29:01.334227 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:29:01.334239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:29:01.334250 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:29:01.334260 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 00:29:01.334282 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 00:29:01.334304 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 00:29:01.334324 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 00:29:01.334334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:01.334351 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 00:29:01.334375 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 00:29:01.334396 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 00:29:01.334407 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 00:29:01.334418 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:29:01.334440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:29:01.334466 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 00:29:01.334487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:29:01.334498 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:29:01.334521 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 17 00:29:01.334538 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 00:29:01.334555 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:29:01.334578 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 00:29:01.334603 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 00:29:01.334629 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 00:29:01.334651 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:29:01.334675 kernel: loop: module loaded Jan 17 00:29:01.334704 kernel: fuse: init (API version 7.39) Jan 17 00:29:01.334732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:29:01.334765 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 00:29:01.334788 kernel: ACPI: bus type drm_connector registered Jan 17 00:29:01.334808 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 00:29:01.334831 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:29:01.334854 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:01.334921 systemd-journald[1322]: Collecting audit messages is disabled. Jan 17 00:29:01.334966 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 00:29:01.334991 systemd-journald[1322]: Journal started Jan 17 00:29:01.335042 systemd-journald[1322]: Runtime Journal (/run/log/journal/21386d5659e743ac8785f74c19deae80) is 8.0M, max 158.8M, 150.8M free. Jan 17 00:29:01.343777 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 00:29:01.348555 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 00:29:01.353032 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 00:29:01.356491 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 00:29:01.360281 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 00:29:01.364999 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 00:29:01.368688 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 00:29:01.373457 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:29:01.378895 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 00:29:01.379211 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 00:29:01.384268 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:29:01.384567 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:29:01.389265 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:29:01.389573 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:29:01.394890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:29:01.395151 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:29:01.400016 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Jan 17 00:29:01.400284 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 00:29:01.405369 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:29:01.405656 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:29:01.409632 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:29:01.413954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 00:29:01.420117 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 00:29:01.439059 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 00:29:01.461052 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 00:29:01.467905 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 00:29:01.471300 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 00:29:01.482081 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 00:29:01.489984 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 00:29:01.493260 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:29:01.506602 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 00:29:01.510104 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:29:01.517907 systemd-journald[1322]: Time spent on flushing to /var/log/journal/21386d5659e743ac8785f74c19deae80 is 158.964ms for 944 entries. Jan 17 00:29:01.517907 systemd-journald[1322]: System Journal (/var/log/journal/21386d5659e743ac8785f74c19deae80) is 11.8M, max 2.6G, 2.6G free. Jan 17 00:29:01.739661 systemd-journald[1322]: Received client request to flush runtime journal. Jan 17 00:29:01.739719 systemd-journald[1322]: /var/log/journal/21386d5659e743ac8785f74c19deae80/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 17 00:29:01.739795 systemd-journald[1322]: Rotating system journal. Jan 17 00:29:01.511957 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:29:01.535788 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:29:01.550089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:29:01.557902 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 00:29:01.561639 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 00:29:01.574471 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 00:29:01.582870 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 00:29:01.594766 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 00:29:01.635355 udevadm[1370]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 00:29:01.691502 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jan 17 00:29:01.744766 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 00:29:01.753380 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Jan 17 00:29:01.753405 systemd-tmpfiles[1363]: ACLs are not supported, ignoring. Jan 17 00:29:01.761604 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:29:01.778996 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 00:29:01.836977 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 00:29:01.855084 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:29:01.886149 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 17 00:29:01.886179 systemd-tmpfiles[1386]: ACLs are not supported, ignoring. Jan 17 00:29:01.894065 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:29:02.332761 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 00:29:02.343059 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:29:02.373451 systemd-udevd[1392]: Using default interface naming scheme 'v255'. Jan 17 00:29:02.446803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:29:02.464001 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:29:02.511000 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:29:02.550755 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 00:29:02.647720 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:29:02.698772 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:29:02.717874 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 00:29:02.730761 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 00:29:02.738781 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 00:29:02.753343 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:29:02.761082 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:29:02.761170 kernel: hv_vmbus: registering driver hv_balloon Jan 17 00:29:02.770839 systemd-networkd[1399]: lo: Link UP Jan 17 00:29:02.772565 systemd-networkd[1399]: lo: Gained carrier Jan 17 00:29:02.780487 systemd-networkd[1399]: Enumeration completed Jan 17 00:29:02.781230 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:29:02.788780 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 00:29:02.789703 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:29:02.790782 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:29:02.798231 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 00:29:02.952786 kernel: mlx5_core 96b3:00:02.0 enP38579s1: Link up Jan 17 00:29:02.976778 kernel: hv_netvsc 000d3ab3-5551-000d-3ab3-5551000d3ab3 eth0: Data path switched to VF: enP38579s1 Jan 17 00:29:02.983286 systemd-networkd[1399]: enP38579s1: Link UP Jan 17 00:29:02.983514 systemd-networkd[1399]: eth0: Link UP Jan 17 00:29:02.983521 systemd-networkd[1399]: eth0: Gained carrier Jan 17 00:29:02.983552 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:29:02.988159 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:29:02.992678 systemd-networkd[1399]: enP38579s1: Gained carrier Jan 17 00:29:03.019870 systemd-networkd[1399]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 00:29:03.028193 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:29:03.028535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:29:03.049101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:29:03.087382 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1406) Jan 17 00:29:03.091038 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 00:29:03.091424 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:29:03.112167 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:29:03.215249 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:29:03.288867 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 17 00:29:03.322559 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:29:03.330998 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:29:03.336696 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:29:03.359674 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:29:03.392654 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:29:03.399506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:29:03.411979 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:29:03.417969 lvm[1494]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:29:03.445542 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:29:03.449898 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 00:29:03.453364 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 00:29:03.453404 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 00:29:03.456858 systemd[1]: Reached target machines.target - Containers. Jan 17 00:29:03.460665 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 00:29:03.468943 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 00:29:03.475952 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jan 17 00:29:03.479119 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:29:03.488099 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:29:03.493025 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 00:29:03.499913 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 00:29:03.504677 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 00:29:03.525254 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:29:03.540766 kernel: loop0: detected capacity change from 0 to 224512 Jan 17 00:29:03.561375 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 00:29:03.564286 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 00:29:03.573769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 00:29:03.600768 kernel: loop1: detected capacity change from 0 to 140768 Jan 17 00:29:03.724035 kernel: loop2: detected capacity change from 0 to 31056 Jan 17 00:29:03.833887 kernel: loop3: detected capacity change from 0 to 142488 Jan 17 00:29:03.965169 kernel: loop4: detected capacity change from 0 to 224512 Jan 17 00:29:03.984774 kernel: loop5: detected capacity change from 0 to 140768 Jan 17 00:29:04.002790 kernel: loop6: detected capacity change from 0 to 31056 Jan 17 00:29:04.026793 kernel: loop7: detected capacity change from 0 to 142488 Jan 17 00:29:04.055323 (sd-merge)[1515]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 00:29:04.055929 (sd-merge)[1515]: Merged extensions into '/usr'. Jan 17 00:29:04.061251 systemd[1]: Reloading requested from client PID 1502 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 00:29:04.061279 systemd[1]: Reloading... Jan 17 00:29:04.167771 zram_generator::config[1545]: No configuration found. Jan 17 00:29:04.309924 systemd-networkd[1399]: eth0: Gained IPv6LL Jan 17 00:29:04.343768 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:29:04.420394 systemd[1]: Reloading finished in 358 ms. Jan 17 00:29:04.442942 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:29:04.451577 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 00:29:04.470155 systemd[1]: Starting ensure-sysext.service... Jan 17 00:29:04.475930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:29:04.484917 systemd[1]: Reloading requested from client PID 1609 ('systemctl') (unit ensure-sysext.service)... Jan 17 00:29:04.485116 systemd[1]: Reloading... Jan 17 00:29:04.539267 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:29:04.540526 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:29:04.542665 systemd-tmpfiles[1610]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
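The loop0 through loop7 capacity changes and the (sd-merge) lines above are systemd-sysext attaching the four extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure') and overlaying their /usr trees onto the host. A minimal sketch of what such an image must contain, assuming a hypothetical extension named "hello"; the extension-release file is what sd-merge validates before merging:

    # Sketch: skeleton of a minimal systemd-sysext extension tree.
    # ID=_any skips the os-release match; a real Flatcar extension would
    # pin ID=flatcar plus SYSEXT_LEVEL/VERSION_ID instead.
    from pathlib import Path

    root = Path("hello-ext")
    payload = root / "usr/bin/hello"
    payload.parent.mkdir(parents=True, exist_ok=True)
    payload.write_text("#!/bin/sh\necho hello from a sysext\n")
    payload.chmod(0o755)

    release = root / "usr/lib/extension-release.d/extension-release.hello"
    release.parent.mkdir(parents=True, exist_ok=True)
    release.write_text("ID=_any\n")
    # Pack with e.g. mksquashfs hello-ext hello.raw, place it in
    # /var/lib/extensions/, then run: systemd-sysext merge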
Jan 17 00:29:04.545028 systemd-tmpfiles[1610]: ACLs are not supported, ignoring. Jan 17 00:29:04.545114 systemd-tmpfiles[1610]: ACLs are not supported, ignoring. Jan 17 00:29:04.560092 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:29:04.560111 systemd-tmpfiles[1610]: Skipping /boot Jan 17 00:29:04.585005 systemd-tmpfiles[1610]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:29:04.585029 systemd-tmpfiles[1610]: Skipping /boot Jan 17 00:29:04.620772 zram_generator::config[1644]: No configuration found. Jan 17 00:29:04.790335 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:29:04.881628 systemd[1]: Reloading finished in 395 ms. Jan 17 00:29:04.907999 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:29:04.925180 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:29:04.941071 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:29:04.950105 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:29:04.966028 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:29:04.983045 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:29:05.001840 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:05.002126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:29:05.010314 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:29:05.030130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:29:05.044149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:29:05.050962 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:29:05.051203 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:05.054209 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:29:05.054468 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:29:05.078661 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:29:05.081088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:29:05.089916 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:29:05.092074 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:29:05.112486 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:29:05.131142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:05.131544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:29:05.139364 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
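The repeated "Duplicate line for path" warnings above mean that more than one tmpfiles.d fragment declares the same path; systemd-tmpfiles keeps the first declaration and ignores the rest. A rough diagnostic sketch that lists which fragments collide (it deliberately ignores the shadowing of same-named files across /etc, /run and /usr/lib, so it over-reports compared to systemd's own resolution):

    # Sketch: find tmpfiles.d entries declaring the same path twice.
    # Format per line is: type path mode user group age argument,
    # so the second whitespace field is the path.
    from collections import defaultdict
    from pathlib import Path

    declared = defaultdict(list)
    for directory in ("/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"):
        for conf in sorted(Path(directory).glob("*.conf")):
            for raw in conf.read_text().splitlines():
                line = raw.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    declared[fields[1]].append(str(conf))

    for path, sources in sorted(declared.items()):
        if len(sources) > 1:
            print(f"{path}: {', '.join(sources)}")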
Jan 17 00:29:05.150079 augenrules[1738]: No rules Jan 17 00:29:05.160103 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:29:05.160281 systemd-resolved[1710]: Positive Trust Anchors: Jan 17 00:29:05.160302 systemd-resolved[1710]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:29:05.160357 systemd-resolved[1710]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:29:05.171106 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:29:05.194165 systemd-resolved[1710]: Using system hostname 'ci-4081.3.6-n-2e1a0c4804'. Jan 17 00:29:05.197151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:29:05.205217 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:29:05.206483 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:29:05.210720 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:29:05.214503 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:29:05.220363 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:29:05.225194 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:29:05.230023 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:29:05.230274 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:29:05.234488 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:29:05.234690 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:29:05.238372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:29:05.238608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:29:05.243186 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:29:05.243440 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:29:05.256321 systemd[1]: Finished ensure-sysext.service. Jan 17 00:29:05.267349 systemd[1]: Reached target network.target - Network. Jan 17 00:29:05.270113 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:29:05.274285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:29:05.278178 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:29:05.278270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:29:05.324793 ldconfig[1498]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jan 17 00:29:05.338391 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:29:05.344548 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:29:05.353981 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:29:05.362435 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:29:05.376980 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:29:05.381160 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:29:05.384856 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:29:05.389149 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:29:05.394131 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:29:05.397653 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:29:05.401815 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:29:05.406654 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:29:05.406714 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:29:05.409390 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:29:05.413044 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:29:05.418559 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:29:05.423513 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:29:05.428242 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:29:05.431880 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:29:05.435046 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:29:05.437835 systemd[1]: System is tainted: cgroupsv1 Jan 17 00:29:05.437911 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:29:05.437947 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:29:05.445897 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:29:05.453957 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:29:05.461105 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:29:05.475987 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:29:05.492947 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:29:05.511132 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 00:29:05.514559 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:29:05.514628 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:29:05.518555 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. 
Jan 17 00:29:05.525115 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:29:05.530221 jq[1776]: false Jan 17 00:29:05.541890 chronyd[1783]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:29:05.530693 (chronyd)[1771]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:29:05.537561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:05.558579 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:29:05.560488 chronyd[1783]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:29:05.560728 chronyd[1783]: Loaded seccomp filter (level 2) Jan 17 00:29:05.561698 KVP[1780]: KVP starting; pid is:1780 Jan 17 00:29:05.577106 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:29:05.587785 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:29:05.585381 KVP[1780]: KVP LIC Version: 3.1 Jan 17 00:29:05.592648 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:29:05.597780 extend-filesystems[1779]: Found loop4 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found loop5 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found loop6 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found loop7 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda1 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda2 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda3 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found usr Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda4 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda6 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda7 Jan 17 00:29:05.597780 extend-filesystems[1779]: Found sda9 Jan 17 00:29:05.597780 extend-filesystems[1779]: Checking size of /dev/sda9 Jan 17 00:29:05.607156 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:29:05.622446 dbus-daemon[1774]: [system] SELinux support is enabled Jan 17 00:29:05.665982 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:29:05.682984 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:29:05.687807 extend-filesystems[1779]: Old size kept for /dev/sda9 Jan 17 00:29:05.687807 extend-filesystems[1779]: Found sr0 Jan 17 00:29:05.691127 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:29:05.703546 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:29:05.718144 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:29:05.723533 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:29:05.751672 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 00:29:05.760388 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:29:05.761287 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:29:05.764928 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:29:05.765365 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jan 17 00:29:05.782408 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:29:05.784365 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:29:05.792847 coreos-metadata[1773]: Jan 17 00:29:05.791 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:29:05.795830 jq[1815]: true Jan 17 00:29:05.805213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:29:05.816900 coreos-metadata[1773]: Jan 17 00:29:05.814 INFO Fetch successful Jan 17 00:29:05.825419 coreos-metadata[1773]: Jan 17 00:29:05.821 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:29:05.827473 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:29:05.829302 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:29:05.848704 coreos-metadata[1773]: Jan 17 00:29:05.843 INFO Fetch successful Jan 17 00:29:05.858057 coreos-metadata[1773]: Jan 17 00:29:05.857 INFO Fetching http://168.63.129.16/machine/b32685c1-0a97-44f2-bacf-465f4714b333/ecaae2dc%2Daf82%2D47ef%2Dbd81%2D72facd62d209.%5Fci%2D4081.3.6%2Dn%2D2e1a0c4804?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:29:05.861305 systemd-logind[1811]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 00:29:05.867313 coreos-metadata[1773]: Jan 17 00:29:05.862 INFO Fetch successful Jan 17 00:29:05.867313 coreos-metadata[1773]: Jan 17 00:29:05.866 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:29:05.866012 systemd-logind[1811]: New seat seat0. Jan 17 00:29:05.871164 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:29:05.873196 (ntainerd)[1828]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:29:05.898432 coreos-metadata[1773]: Jan 17 00:29:05.889 INFO Fetch successful Jan 17 00:29:05.898626 update_engine[1813]: I20260117 00:29:05.897836 1813 main.cc:92] Flatcar Update Engine starting Jan 17 00:29:05.911249 update_engine[1813]: I20260117 00:29:05.901337 1813 update_check_scheduler.cc:74] Next update check in 6m0s Jan 17 00:29:05.912930 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:29:05.914574 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:29:05.924472 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:29:05.924839 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:29:05.932085 dbus-daemon[1774]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 00:29:05.933311 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:29:05.951781 jq[1825]: true Jan 17 00:29:05.975335 tar[1824]: linux-amd64/LICENSE Jan 17 00:29:05.975335 tar[1824]: linux-amd64/helm Jan 17 00:29:05.990162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:29:05.993162 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
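coreos-metadata above talks to two link-local endpoints: the Azure WireServer at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. The vmSize query from the log can be reproduced from inside the VM; IMDS rejects requests without the Metadata header. A sketch using only the endpoint and api-version shown above:

    # Sketch: repeat the IMDS query coreos-metadata logs above.
    # IMDS is link-local, so this only works from inside the VM.
    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    request = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(request, timeout=5) as response:
        print(response.read().decode())  # e.g. Standard_D4s_v3 (illustrative)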
Jan 17 00:29:06.077486 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:29:06.092358 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:29:06.138774 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1822)
Jan 17 00:29:06.157710 bash[1881]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:29:06.158925 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:29:06.179923 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 17 00:29:06.324500 locksmithd[1861]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:29:06.749245 containerd[1828]: time="2026-01-17T00:29:06.748681700Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 17 00:29:06.843516 containerd[1828]: time="2026-01-17T00:29:06.843205700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.847161900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.847225900Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.847254300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.848024500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.848064200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.848380 containerd[1828]: time="2026-01-17T00:29:06.848184600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:29:06.849267 containerd[1828]: time="2026-01-17T00:29:06.849087000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.849499200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.849528900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.849550500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.849565300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.849678300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.850327 containerd[1828]: time="2026-01-17T00:29:06.850012900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 17 00:29:06.851177 containerd[1828]: time="2026-01-17T00:29:06.851137800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 17 00:29:06.851266 containerd[1828]: time="2026-01-17T00:29:06.851182100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 17 00:29:06.851818 containerd[1828]: time="2026-01-17T00:29:06.851301500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 17 00:29:06.852332 containerd[1828]: time="2026-01-17T00:29:06.851367300Z" level=info msg="metadata content store policy set" policy=shared
Jan 17 00:29:06.872339 containerd[1828]: time="2026-01-17T00:29:06.872272800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 17 00:29:06.872863 containerd[1828]: time="2026-01-17T00:29:06.872401700Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 17 00:29:06.872863 containerd[1828]: time="2026-01-17T00:29:06.872431100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:29:06.872863 containerd[1828]: time="2026-01-17T00:29:06.872464200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 17 00:29:06.872863 containerd[1828]: time="2026-01-17T00:29:06.872489400Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 17 00:29:06.872863 containerd[1828]: time="2026-01-17T00:29:06.872730800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873346000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873526000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873550900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873571200Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873594900Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873616100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873637300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873658800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873682200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873704000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.874892 containerd[1828]: time="2026-01-17T00:29:06.873722700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876772700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876838600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876884100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876917000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876942500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876962700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.876984200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877002900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877024000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877044500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877068200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877093300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877113500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877136000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877184 containerd[1828]: time="2026-01-17T00:29:06.877170800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877216300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877234600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877250800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877326200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877355400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877374000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877395000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877410000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877433800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877451000Z" level=info msg="NRI interface is disabled by configuration."
Jan 17 00:29:06.877733 containerd[1828]: time="2026-01-17T00:29:06.877468900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 17 00:29:06.878189 containerd[1828]: time="2026-01-17T00:29:06.877964300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 17 00:29:06.878189 containerd[1828]: time="2026-01-17T00:29:06.878057000Z" level=info msg="Connect containerd service"
Jan 17 00:29:06.878189 containerd[1828]: time="2026-01-17T00:29:06.878130900Z" level=info msg="using legacy CRI server"
Jan 17 00:29:06.878189 containerd[1828]: time="2026-01-17T00:29:06.878142900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 17 00:29:06.878500 containerd[1828]: time="2026-01-17T00:29:06.878321400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:29:06.890535 containerd[1828]: time="2026-01-17T00:29:06.887863800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 17 00:29:06.890535 containerd[1828]: time="2026-01-17T00:29:06.888472100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 17 00:29:06.890535 containerd[1828]: time="2026-01-17T00:29:06.888539400Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 17 00:29:06.890535 containerd[1828]: time="2026-01-17T00:29:06.888681400Z" level=info msg="Start subscribing containerd event"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.896862100Z" level=info msg="Start recovering state"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.897054500Z" level=info msg="Start event monitor"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.897089500Z" level=info msg="Start snapshots syncer"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.897112400Z" level=info msg="Start cni network conf syncer for default"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.897125200Z" level=info msg="Start streaming server"
Jan 17 00:29:06.902376 containerd[1828]: time="2026-01-17T00:29:06.897263500Z" level=info msg="containerd successfully booted in 0.152115s"
Jan 17 00:29:06.897489 systemd[1]: Started containerd.service - containerd container runtime.
Jan 17 00:29:07.045551 sshd_keygen[1821]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 17 00:29:07.096418 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 17 00:29:07.113093 tar[1824]: linux-amd64/README.md
Jan 17 00:29:07.120030 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 17 00:29:07.132571 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 17 00:29:07.141505 systemd[1]: issuegen.service: Deactivated successfully.
Jan 17 00:29:07.141911 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 17 00:29:07.152336 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 17 00:29:07.171172 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 17 00:29:07.195950 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 17 00:29:07.213414 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 17 00:29:07.228358 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 17 00:29:07.233123 systemd[1]: Reached target getty.target - Login Prompts.
Jan 17 00:29:07.244073 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 17 00:29:07.731028 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:29:07.735000 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 17 00:29:07.737710 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 00:29:07.738794 systemd[1]: Startup finished in 618ms (firmware) + 4.859s (loader) + 9.507s (kernel) + 8.002s (userspace) = 22.988s.
Jan 17 00:29:07.959378 login[1946]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 00:29:07.962964 login[1947]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 17 00:29:07.972674 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 17 00:29:07.983537 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 17 00:29:07.989797 systemd-logind[1811]: New session 2 of user core.
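The level=error line from the CRI plugin above ("no network config found in /etc/cni/net.d") is expected on first boot: containerd starts before any CNI provider has installed a network config, and the "cni network conf syncer" picks the file up once it appears. Purely for illustration, a conflist of the general shape the CRI plugin loads; the name, subnet, and plugin choice here are placeholders, not what a real cluster add-on would install:

    # Sketch: write a minimal bridge + portmap CNI network list where the
    # CRI plugin looks for one (requires root; values are illustrative).
    import json
    from pathlib import Path

    conflist = {
        "cniVersion": "0.4.0",
        "name": "containerd-net",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    confdir = Path("/etc/cni/net.d")
    confdir.mkdir(parents=True, exist_ok=True)
    (confdir / "10-containerd-net.conflist").write_text(json.dumps(conflist, indent=2))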
Jan 17 00:29:08.005099 systemd-logind[1811]: New session 1 of user core. Jan 17 00:29:08.028009 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:29:08.044399 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:29:08.058507 (systemd)[1971]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:29:08.279272 systemd[1971]: Queued start job for default target default.target. Jan 17 00:29:08.280324 systemd[1971]: Created slice app.slice - User Application Slice. Jan 17 00:29:08.280355 systemd[1971]: Reached target paths.target - Paths. Jan 17 00:29:08.280376 systemd[1971]: Reached target timers.target - Timers. Jan 17 00:29:08.288953 systemd[1971]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:29:08.301760 systemd[1971]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:29:08.302718 systemd[1971]: Reached target sockets.target - Sockets. Jan 17 00:29:08.306716 systemd[1971]: Reached target basic.target - Basic System. Jan 17 00:29:08.306843 systemd[1971]: Reached target default.target - Main User Target. Jan 17 00:29:08.306885 systemd[1971]: Startup finished in 236ms. Jan 17 00:29:08.307126 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:29:08.313538 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:29:08.316549 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:29:08.427246 waagent[1948]: 2026-01-17T00:29:08.425022Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:29:08.427246 waagent[1948]: 2026-01-17T00:29:08.425660Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:29:08.427246 waagent[1948]: 2026-01-17T00:29:08.426648Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:29:08.428471 waagent[1948]: 2026-01-17T00:29:08.428402Z INFO Daemon Daemon Run daemon Jan 17 00:29:08.429713 waagent[1948]: 2026-01-17T00:29:08.429667Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:29:08.430618 waagent[1948]: 2026-01-17T00:29:08.430575Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:29:08.431873 waagent[1948]: 2026-01-17T00:29:08.431834Z INFO Daemon Daemon Activate resource disk Jan 17 00:29:08.432674 waagent[1948]: 2026-01-17T00:29:08.432632Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:29:08.437384 waagent[1948]: 2026-01-17T00:29:08.437303Z INFO Daemon Daemon Found device: None Jan 17 00:29:08.438663 waagent[1948]: 2026-01-17T00:29:08.438613Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:29:08.439136 waagent[1948]: 2026-01-17T00:29:08.439097Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:29:08.441807 waagent[1948]: 2026-01-17T00:29:08.441733Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:29:08.442335 waagent[1948]: 2026-01-17T00:29:08.442294Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:29:08.483174 waagent[1948]: 2026-01-17T00:29:08.483032Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. 
Jan 17 00:29:08.493003 waagent[1948]: 2026-01-17T00:29:08.492456Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:29:08.498347 waagent[1948]: 2026-01-17T00:29:08.498182Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:29:08.506102 waagent[1948]: 2026-01-17T00:29:08.504827Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:29:08.554523 waagent[1948]: 2026-01-17T00:29:08.554315Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:29:08.579398 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:29:08.583178 waagent[1948]: 2026-01-17T00:29:08.582069Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:29:08.583178 waagent[1948]: 2026-01-17T00:29:08.582497Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:29:08.584285 waagent[1948]: 2026-01-17T00:29:08.584225Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 17 00:29:08.585173 waagent[1948]: 2026-01-17T00:29:08.585133Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:29:08.586403 waagent[1948]: 2026-01-17T00:29:08.586355Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:29:08.587179 waagent[1948]: 2026-01-17T00:29:08.587139Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:29:08.616006 waagent[1948]: 2026-01-17T00:29:08.615928Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:29:08.624916 waagent[1948]: 2026-01-17T00:29:08.616517Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:29:08.624916 waagent[1948]: 2026-01-17T00:29:08.617335Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:29:08.743789 waagent[1948]: 2026-01-17T00:29:08.740535Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:29:08.743789 waagent[1948]: 2026-01-17T00:29:08.741056Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:29:08.751015 waagent[1948]: 2026-01-17T00:29:08.750893Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:29:08.775266 kubelet[1958]: E0117 00:29:08.775187 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:08.778655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:08.779877 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:29:09.052862 waagent[1948]: 2026-01-17T00:29:09.052630Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:29:09.061082 waagent[1948]: 2026-01-17T00:29:09.054548Z INFO Daemon Jan 17 00:29:09.061082 waagent[1948]: 2026-01-17T00:29:09.056072Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6827a49e-578f-4dbe-9267-06d8265733fa eTag: 7080466707691692771 source: Fabric] Jan 17 00:29:09.061082 waagent[1948]: 2026-01-17T00:29:09.057777Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
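The daemon negotiated wire protocol 2012-11-30 earlier ("Wire protocol version:2012-11-30"), and the agent sends that value as the x-ms-version header on its WireServer requests. A sketch of the goal-state fetch behind the "Fetching full goal state" lines, assuming the same endpoint and version; the response is an XML document naming the incarnation:

    # Sketch: fetch the WireServer goal state the way the agent does.
    # Only reachable from inside the VM; 168.63.129.16 is Azure's
    # well-known platform endpoint.
    import urllib.request

    request = urllib.request.Request(
        "http://168.63.129.16/machine/?comp=goalstate",
        headers={"x-ms-version": "2012-11-30"},
    )
    with urllib.request.urlopen(request, timeout=10) as response:
        print(response.read().decode()[:300])  # <GoalState>...<Incarnation>1</Incarnation>...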
Jan 17 00:29:09.061082 waagent[1948]: 2026-01-17T00:29:09.059139Z INFO Daemon Jan 17 00:29:09.061082 waagent[1948]: 2026-01-17T00:29:09.059737Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:29:09.077491 waagent[1948]: 2026-01-17T00:29:09.066826Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:29:09.146455 waagent[1948]: 2026-01-17T00:29:09.146336Z INFO Daemon Downloaded certificate {'thumbprint': '8B9F0E645812564ACBB1269663BBAA74A547CD61', 'hasPrivateKey': True} Jan 17 00:29:09.157537 waagent[1948]: 2026-01-17T00:29:09.147335Z INFO Daemon Fetch goal state completed Jan 17 00:29:09.160236 waagent[1948]: 2026-01-17T00:29:09.160172Z INFO Daemon Daemon Starting provisioning Jan 17 00:29:09.168783 waagent[1948]: 2026-01-17T00:29:09.160536Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:29:09.168783 waagent[1948]: 2026-01-17T00:29:09.161789Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-2e1a0c4804] Jan 17 00:29:09.174131 waagent[1948]: 2026-01-17T00:29:09.174025Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-2e1a0c4804] Jan 17 00:29:09.183105 waagent[1948]: 2026-01-17T00:29:09.174621Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:29:09.183105 waagent[1948]: 2026-01-17T00:29:09.175666Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:29:09.195654 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:29:09.195667 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:29:09.195734 systemd-networkd[1399]: eth0: DHCP lease lost Jan 17 00:29:09.197432 waagent[1948]: 2026-01-17T00:29:09.197315Z INFO Daemon Daemon Create user account if not exists Jan 17 00:29:09.218242 waagent[1948]: 2026-01-17T00:29:09.197831Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:29:09.218242 waagent[1948]: 2026-01-17T00:29:09.198357Z INFO Daemon Daemon Configure sudoer Jan 17 00:29:09.218242 waagent[1948]: 2026-01-17T00:29:09.199393Z INFO Daemon Daemon Configure sshd Jan 17 00:29:09.218242 waagent[1948]: 2026-01-17T00:29:09.200373Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:29:09.218242 waagent[1948]: 2026-01-17T00:29:09.201795Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:29:09.221143 systemd-networkd[1399]: eth0: DHCPv6 lease lost Jan 17 00:29:09.273870 systemd-networkd[1399]: eth0: DHCPv4 address 10.200.8.33/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 00:29:19.029310 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:29:19.035026 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:19.196088 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:29:19.196395 (kubelet)[2040]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:29:19.820670 kubelet[2040]: E0117 00:29:19.820600 2040 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:19.824950 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:19.825312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:29:29.383518 chronyd[1783]: Selected source PHC0 Jan 17 00:29:30.075609 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:29:30.085089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:30.235160 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:30.235450 (kubelet)[2060]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:29:30.901654 kubelet[2060]: E0117 00:29:30.901584 2060 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:30.905067 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:30.905416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:29:39.283134 waagent[1948]: 2026-01-17T00:29:39.283053Z INFO Daemon Daemon Provisioning complete Jan 17 00:29:39.296194 waagent[1948]: 2026-01-17T00:29:39.296119Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:29:39.303765 waagent[1948]: 2026-01-17T00:29:39.296556Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
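The kubelet crash loop above is the normal state of an enabled-but-unconfigured node: systemd's restart policy keeps rescheduling the unit until something writes /var/lib/kubelet/config.yaml, which kubeadm init or join normally generates. Purely for illustration, the smallest document that satisfies the load path named in the error; kubelet would still need a kubeconfig and more before it could actually join a cluster:

    # Sketch: write a bare KubeletConfiguration so the config load in the
    # error above succeeds (requires root; kubeadm is the normal author).
    from pathlib import Path

    minimal = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    """

    target = Path("/var/lib/kubelet/config.yaml")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_text(minimal)
    # systemd starts kubelet again on the next scheduled restart.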
Jan 17 00:29:39.303765 waagent[1948]: 2026-01-17T00:29:39.297693Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 17 00:29:39.443987 waagent[2068]: 2026-01-17T00:29:39.443849Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 17 00:29:39.444618 waagent[2068]: 2026-01-17T00:29:39.444087Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Jan 17 00:29:39.444618 waagent[2068]: 2026-01-17T00:29:39.444179Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 17 00:29:39.477460 waagent[2068]: 2026-01-17T00:29:39.477318Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 17 00:29:39.477788 waagent[2068]: 2026-01-17T00:29:39.477703Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 17 00:29:39.477911 waagent[2068]: 2026-01-17T00:29:39.477863Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 17 00:29:39.487330 waagent[2068]: 2026-01-17T00:29:39.487222Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 17 00:29:39.498783 waagent[2068]: 2026-01-17T00:29:39.498694Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177
Jan 17 00:29:39.499562 waagent[2068]: 2026-01-17T00:29:39.499493Z INFO ExtHandler
Jan 17 00:29:39.499701 waagent[2068]: 2026-01-17T00:29:39.499623Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 2ade5461-e477-4342-bbac-5e21594f477b eTag: 7080466707691692771 source: Fabric]
Jan 17 00:29:39.500149 waagent[2068]: 2026-01-17T00:29:39.500079Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 17 00:29:39.500928 waagent[2068]: 2026-01-17T00:29:39.500870Z INFO ExtHandler
Jan 17 00:29:39.501021 waagent[2068]: 2026-01-17T00:29:39.500959Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Jan 17 00:29:39.505593 waagent[2068]: 2026-01-17T00:29:39.505535Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Jan 17 00:29:39.653571 waagent[2068]: 2026-01-17T00:29:39.653439Z INFO ExtHandler Downloaded certificate {'thumbprint': '8B9F0E645812564ACBB1269663BBAA74A547CD61', 'hasPrivateKey': True}
Jan 17 00:29:39.654278 waagent[2068]: 2026-01-17T00:29:39.654213Z INFO ExtHandler Fetch goal state completed
Jan 17 00:29:39.917259 waagent[2068]: 2026-01-17T00:29:39.917055Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2068
Jan 17 00:29:39.917421 waagent[2068]: 2026-01-17T00:29:39.917355Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Jan 17 00:29:39.919407 waagent[2068]: 2026-01-17T00:29:39.919323Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk']
Jan 17 00:29:39.920456 waagent[2068]: 2026-01-17T00:29:39.919861Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Jan 17 00:29:39.967821 waagent[2068]: 2026-01-17T00:29:39.967737Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Jan 17 00:29:39.968150 waagent[2068]: 2026-01-17T00:29:39.968082Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Jan 17 00:29:39.976526 waagent[2068]: 2026-01-17T00:29:39.976467Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Jan 17 00:29:39.985814 systemd[1]: Reloading requested from client PID 2081 ('systemctl') (unit waagent.service)...
Jan 17 00:29:39.985836 systemd[1]: Reloading...
Jan 17 00:29:40.109851 zram_generator::config[2118]: No configuration found.
Jan 17 00:29:40.245448 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:29:40.329331 systemd[1]: Reloading finished in 342 ms.
Jan 17 00:29:40.356304 waagent[2068]: 2026-01-17T00:29:40.356165Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Jan 17 00:29:40.368069 systemd[1]: Reloading requested from client PID 2177 ('systemctl') (unit waagent.service)...
Jan 17 00:29:40.368308 systemd[1]: Reloading...
Jan 17 00:29:40.455087 zram_generator::config[2213]: No configuration found.
Jan 17 00:29:40.616115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:29:40.698132 systemd[1]: Reloading finished in 329 ms.
Jan 17 00:29:40.725506 waagent[2068]: 2026-01-17T00:29:40.724143Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Jan 17 00:29:40.725506 waagent[2068]: 2026-01-17T00:29:40.724438Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Jan 17 00:29:40.824651 waagent[2068]: 2026-01-17T00:29:40.824531Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Jan 17 00:29:40.825400 waagent[2068]: 2026-01-17T00:29:40.825323Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Jan 17 00:29:40.826474 waagent[2068]: 2026-01-17T00:29:40.826411Z INFO ExtHandler ExtHandler Starting env monitor service.
Jan 17 00:29:40.826621 waagent[2068]: 2026-01-17T00:29:40.826571Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 17 00:29:40.826884 waagent[2068]: 2026-01-17T00:29:40.826770Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 17 00:29:40.827350 waagent[2068]: 2026-01-17T00:29:40.827298Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Jan 17 00:29:40.827574 waagent[2068]: 2026-01-17T00:29:40.827527Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Jan 17 00:29:40.828206 waagent[2068]: 2026-01-17T00:29:40.828100Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:29:40.828326 waagent[2068]: 2026-01-17T00:29:40.828275Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:29:40.828326 waagent[2068]: 2026-01-17T00:29:40.828345Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:29:40.828326 waagent[2068]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:29:40.828326 waagent[2068]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:29:40.828326 waagent[2068]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:29:40.828326 waagent[2068]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:29:40.828326 waagent[2068]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:29:40.828326 waagent[2068]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:29:40.828695 waagent[2068]: 2026-01-17T00:29:40.828579Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:29:40.828759 waagent[2068]: 2026-01-17T00:29:40.828677Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:29:40.828815 waagent[2068]: 2026-01-17T00:29:40.828777Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:29:40.829177 waagent[2068]: 2026-01-17T00:29:40.829119Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:29:40.829282 waagent[2068]: 2026-01-17T00:29:40.829222Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:29:40.831156 waagent[2068]: 2026-01-17T00:29:40.830976Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:29:40.831239 waagent[2068]: 2026-01-17T00:29:40.831133Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 17 00:29:40.831919 waagent[2068]: 2026-01-17T00:29:40.831869Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:29:40.840388 waagent[2068]: 2026-01-17T00:29:40.840315Z INFO ExtHandler ExtHandler Jan 17 00:29:40.840567 waagent[2068]: 2026-01-17T00:29:40.840465Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 461dcab9-29d2-4a98-bd52-5776637b6b03 correlation a927296e-74da-4725-b624-4bb9c97db8d0 created: 2026-01-17T00:28:32.752563Z] Jan 17 00:29:40.844161 waagent[2068]: 2026-01-17T00:29:40.841052Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
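The routing table dumped above is the raw /proc/net/route format: addresses and masks are little-endian hexadecimal. A small standard-library decoder, checked against the rows above:

```python
import socket
import struct

def hex_to_ip(field: str) -> str:
    """Decode a little-endian hex address field from /proc/net/route."""
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

# Values taken from the routing table rows above.
print(hex_to_ip("0108C80A"))  # 10.200.8.1      - default gateway
print(hex_to_ip("0008C80A"))  # 10.200.8.0      - local /24
print(hex_to_ip("10813FA8"))  # 168.63.129.16   - Azure WireServer
print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254 - instance metadata service
```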
Jan 17 00:29:40.844161 waagent[2068]: 2026-01-17T00:29:40.841993Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 17 00:29:40.855637 waagent[2068]: 2026-01-17T00:29:40.855565Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:29:40.855637 waagent[2068]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:29:40.855637 waagent[2068]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:29:40.855637 waagent[2068]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:55:51 brd ff:ff:ff:ff:ff:ff Jan 17 00:29:40.855637 waagent[2068]: 3: enP38579s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b3:55:51 brd ff:ff:ff:ff:ff:ff\ altname enP38579p0s2 Jan 17 00:29:40.855637 waagent[2068]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:29:40.855637 waagent[2068]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:29:40.855637 waagent[2068]: 2: eth0 inet 10.200.8.33/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:29:40.855637 waagent[2068]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:29:40.855637 waagent[2068]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:29:40.855637 waagent[2068]: 2: eth0 inet6 fe80::20d:3aff:feb3:5551/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:29:40.898978 waagent[2068]: 2026-01-17T00:29:40.898884Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8D55CA35-D5B4-4CFC-BB44-0EE5B8B0AC79;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:29:40.943916 waagent[2068]: 2026-01-17T00:29:40.943807Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:29:40.943916 waagent[2068]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.943916 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.943916 waagent[2068]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.943916 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.943916 waagent[2068]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.943916 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.943916 waagent[2068]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:29:40.943916 waagent[2068]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:29:40.943916 waagent[2068]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:29:40.947996 waagent[2068]: 2026-01-17T00:29:40.947913Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:29:40.947996 waagent[2068]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.947996 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.947996 waagent[2068]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.947996 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.947996 waagent[2068]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:29:40.947996 waagent[2068]: pkts bytes target prot opt in out source destination Jan 17 00:29:40.947996 waagent[2068]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:29:40.947996 waagent[2068]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:29:40.947996 waagent[2068]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:29:40.948453 waagent[2068]: 2026-01-17T00:29:40.948323Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:29:41.148821 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:29:41.156095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:41.397030 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:41.403236 (kubelet)[2318]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:29:41.993004 kubelet[2318]: E0117 00:29:41.992898 2318 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:41.996115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:41.996460 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:29:49.197686 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:29:49.204143 systemd[1]: Started sshd@0-10.200.8.33:22-10.200.16.10:42230.service - OpenSSH per-connection server daemon (10.200.16.10:42230). 
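The firewall dump above shows the three OUTPUT-chain rules waagent maintains for the WireServer (168.63.129.16): allow DNS on tcp/53, allow root-owned traffic, and drop any other new or invalid connection. A sketch of equivalent iptables invocations; the table/chain placement is an assumption, and on a real node waagent installs these rules itself:

```python
import subprocess

WIRESERVER = "168.63.129.16"

# Equivalents of the three rules printed above (table/chain placement
# is an assumption; waagent manages these rules on its own).
RULES = [
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0",
     "-j", "ACCEPT"],
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack",
     "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", "-A", "OUTPUT", *rule], check=True)
```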
Jan 17 00:29:49.861767 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 42230 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:49.863632 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:49.869514 systemd-logind[1811]: New session 3 of user core. Jan 17 00:29:49.876273 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:29:50.423114 systemd[1]: Started sshd@1-10.200.8.33:22-10.200.16.10:57268.service - OpenSSH per-connection server daemon (10.200.16.10:57268). Jan 17 00:29:50.882354 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 17 00:29:51.064033 sshd[2332]: Accepted publickey for core from 10.200.16.10 port 57268 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:51.065875 sshd[2332]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:51.072036 systemd-logind[1811]: New session 4 of user core. Jan 17 00:29:51.082253 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:29:51.229602 update_engine[1813]: I20260117 00:29:51.229468 1813 update_attempter.cc:509] Updating boot flags... Jan 17 00:29:51.293785 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2348) Jan 17 00:29:51.424001 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2348) Jan 17 00:29:51.529472 sshd[2332]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:51.533812 systemd[1]: sshd@1-10.200.8.33:22-10.200.16.10:57268.service: Deactivated successfully. Jan 17 00:29:51.540826 systemd-logind[1811]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:29:51.541490 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:29:51.542713 systemd-logind[1811]: Removed session 4. Jan 17 00:29:51.644513 systemd[1]: Started sshd@2-10.200.8.33:22-10.200.16.10:57284.service - OpenSSH per-connection server daemon (10.200.16.10:57284). Jan 17 00:29:52.148848 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:29:52.155240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:29:52.290221 sshd[2406]: Accepted publickey for core from 10.200.16.10 port 57284 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:52.293624 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:52.300153 systemd-logind[1811]: New session 5 of user core. Jan 17 00:29:52.312871 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:29:52.320980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:29:52.324185 (kubelet)[2420]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:29:52.372992 kubelet[2420]: E0117 00:29:52.372916 2420 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:29:52.376293 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:29:52.376677 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:29:52.749573 sshd[2406]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:52.753545 systemd[1]: sshd@2-10.200.8.33:22-10.200.16.10:57284.service: Deactivated successfully. Jan 17 00:29:52.760412 systemd-logind[1811]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:29:52.761019 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:29:52.762826 systemd-logind[1811]: Removed session 5. Jan 17 00:29:52.866537 systemd[1]: Started sshd@3-10.200.8.33:22-10.200.16.10:57300.service - OpenSSH per-connection server daemon (10.200.16.10:57300). Jan 17 00:29:53.514851 sshd[2434]: Accepted publickey for core from 10.200.16.10 port 57300 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:53.516924 sshd[2434]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:53.523231 systemd-logind[1811]: New session 6 of user core. Jan 17 00:29:53.529181 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:29:53.978821 sshd[2434]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:53.982720 systemd[1]: sshd@3-10.200.8.33:22-10.200.16.10:57300.service: Deactivated successfully. Jan 17 00:29:53.988041 systemd-logind[1811]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:29:53.988500 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:29:53.990052 systemd-logind[1811]: Removed session 6. Jan 17 00:29:54.087437 systemd[1]: Started sshd@4-10.200.8.33:22-10.200.16.10:57306.service - OpenSSH per-connection server daemon (10.200.16.10:57306). Jan 17 00:29:54.717626 sshd[2442]: Accepted publickey for core from 10.200.16.10 port 57306 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:54.719402 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:54.725176 systemd-logind[1811]: New session 7 of user core. Jan 17 00:29:54.732179 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:29:55.104943 sudo[2446]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:29:55.105352 sudo[2446]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:29:55.122030 sudo[2446]: pam_unix(sudo:session): session closed for user root Jan 17 00:29:55.227176 sshd[2442]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:55.231539 systemd[1]: sshd@4-10.200.8.33:22-10.200.16.10:57306.service: Deactivated successfully. Jan 17 00:29:55.237980 systemd-logind[1811]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:29:55.238373 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:29:55.240466 systemd-logind[1811]: Removed session 7. Jan 17 00:29:55.336198 systemd[1]: Started sshd@5-10.200.8.33:22-10.200.16.10:57320.service - OpenSSH per-connection server daemon (10.200.16.10:57320). Jan 17 00:29:55.979872 sshd[2451]: Accepted publickey for core from 10.200.16.10 port 57320 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:55.981819 sshd[2451]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:55.987591 systemd-logind[1811]: New session 8 of user core. Jan 17 00:29:55.994178 systemd[1]: Started session-8.scope - Session 8 of User core. 
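Sessions 3 through 8 all follow the same pattern: sshd accepts the public key, pam_unix opens the session, logind registers it, and a session scope unit starts. A regex sketch (an assumed helper for log auditing, not part of this system) that pulls the user, source, and key fingerprint out of the "Accepted publickey" lines above:

```python
import re

ACCEPTED = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+)"
    r" port (?P<port>\d+) ssh2: RSA (?P<fpr>SHA256:\S+)"
)

line = ("Accepted publickey for core from 10.200.16.10 port 57320 "
        "ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE")

match = ACCEPTED.search(line)
if match:
    print(match.groupdict())
    # {'user': 'core', 'ip': '10.200.16.10', 'port': '57320', 'fpr': 'SHA256:...'}
```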
Jan 17 00:29:56.332214 sudo[2456]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:29:56.332637 sudo[2456]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:29:56.337147 sudo[2456]: pam_unix(sudo:session): session closed for user root Jan 17 00:29:56.343860 sudo[2455]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:29:56.344282 sudo[2455]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:29:56.361539 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:29:56.371278 auditctl[2459]: No rules Jan 17 00:29:56.371894 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:29:56.372316 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:29:56.379596 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:29:56.424577 augenrules[2478]: No rules Jan 17 00:29:56.427971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:29:56.431431 sudo[2455]: pam_unix(sudo:session): session closed for user root Jan 17 00:29:56.534528 sshd[2451]: pam_unix(sshd:session): session closed for user core Jan 17 00:29:56.541063 systemd[1]: sshd@5-10.200.8.33:22-10.200.16.10:57320.service: Deactivated successfully. Jan 17 00:29:56.544539 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:29:56.545578 systemd-logind[1811]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:29:56.546899 systemd-logind[1811]: Removed session 8. Jan 17 00:29:56.646269 systemd[1]: Started sshd@6-10.200.8.33:22-10.200.16.10:57322.service - OpenSSH per-connection server daemon (10.200.16.10:57322). Jan 17 00:29:57.290973 sshd[2487]: Accepted publickey for core from 10.200.16.10 port 57322 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:29:57.292957 sshd[2487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:29:57.298638 systemd-logind[1811]: New session 9 of user core. Jan 17 00:29:57.304542 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:29:57.642704 sudo[2491]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:29:57.643130 sudo[2491]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:29:58.172403 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:29:58.181345 (dockerd)[2506]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:29:58.693367 dockerd[2506]: time="2026-01-17T00:29:58.693288134Z" level=info msg="Starting up" Jan 17 00:29:58.881018 systemd[1]: var-lib-docker-metacopy\x2dcheck2474732171-merged.mount: Deactivated successfully. Jan 17 00:29:58.902602 dockerd[2506]: time="2026-01-17T00:29:58.902528505Z" level=info msg="Loading containers: start." Jan 17 00:29:59.022967 kernel: Initializing XFRM netlink socket Jan 17 00:29:59.097565 systemd-networkd[1399]: docker0: Link UP Jan 17 00:29:59.138557 dockerd[2506]: time="2026-01-17T00:29:59.138499745Z" level=info msg="Loading containers: done." 
Jan 17 00:29:59.180627 dockerd[2506]: time="2026-01-17T00:29:59.180548518Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:29:59.180902 dockerd[2506]: time="2026-01-17T00:29:59.180768323Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:29:59.180996 dockerd[2506]: time="2026-01-17T00:29:59.180953928Z" level=info msg="Daemon has completed initialization" Jan 17 00:29:59.239661 dockerd[2506]: time="2026-01-17T00:29:59.239582324Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:29:59.239864 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:30:00.344199 containerd[1828]: time="2026-01-17T00:30:00.344143710Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:30:01.085118 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2765591384.mount: Deactivated successfully. Jan 17 00:30:02.398527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:30:02.410104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:02.619510 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:02.632084 (kubelet)[2713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:30:03.257559 kubelet[2713]: E0117 00:30:03.257437 2713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:30:03.261417 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:30:03.262044 containerd[1828]: time="2026-01-17T00:30:03.261716754Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:03.264622 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
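dockerd is now up and serving the Engine API on /run/docker.sock (note the earlier docker.socket warning about the legacy /var/run path). A minimal liveness probe over the Unix socket using only the standard library; /_ping is the Engine API health endpoint:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection routed over a Unix domain socket."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")      # Engine API liveness endpoint
print(conn.getresponse().read())   # b'OK' once the daemon is ready
```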
Jan 17 00:30:03.267736 containerd[1828]: time="2026-01-17T00:30:03.267580803Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=29070655" Jan 17 00:30:03.269834 containerd[1828]: time="2026-01-17T00:30:03.269729158Z" level=info msg="ImageCreate event name:\"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:03.275136 containerd[1828]: time="2026-01-17T00:30:03.275043894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:03.276254 containerd[1828]: time="2026-01-17T00:30:03.275919116Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"29067246\" in 2.931722105s" Jan 17 00:30:03.276254 containerd[1828]: time="2026-01-17T00:30:03.275977317Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:7757c58248a29fc7474a8072796848689852b0477adf16765f38b3d1a9bacadf\"" Jan 17 00:30:03.277198 containerd[1828]: time="2026-01-17T00:30:03.277168948Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:30:05.044038 containerd[1828]: time="2026-01-17T00:30:05.043955326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:05.047052 containerd[1828]: time="2026-01-17T00:30:05.046911701Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=24993362" Jan 17 00:30:05.050051 containerd[1828]: time="2026-01-17T00:30:05.049965779Z" level=info msg="ImageCreate event name:\"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:05.056634 containerd[1828]: time="2026-01-17T00:30:05.056198038Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:05.057603 containerd[1828]: time="2026-01-17T00:30:05.057531072Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"26650388\" in 1.780322023s" Jan 17 00:30:05.057702 containerd[1828]: time="2026-01-17T00:30:05.057612174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:0175d0a8243db520e3caa6d5c1e4248fddbc32447a9e8b5f4630831bc1e2489e\"" Jan 17 00:30:05.058647 containerd[1828]: time="2026-01-17T00:30:05.058613800Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:30:06.393606 containerd[1828]: time="2026-01-17T00:30:06.393529859Z" 
level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:06.398861 containerd[1828]: time="2026-01-17T00:30:06.398769493Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=19405084" Jan 17 00:30:06.407084 containerd[1828]: time="2026-01-17T00:30:06.406979202Z" level=info msg="ImageCreate event name:\"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:06.414699 containerd[1828]: time="2026-01-17T00:30:06.414602897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:06.416047 containerd[1828]: time="2026-01-17T00:30:06.415854529Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"21062128\" in 1.357199127s" Jan 17 00:30:06.416047 containerd[1828]: time="2026-01-17T00:30:06.415906730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:23d6a1fb92fda53b787f364351c610e55f073e8bdf0de5831974df7875b13f21\"" Jan 17 00:30:06.417133 containerd[1828]: time="2026-01-17T00:30:06.417100661Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:30:07.712054 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1376217791.mount: Deactivated successfully. 
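The pull records above carry enough to estimate registry throughput: the kube-scheduler data ("bytes read=19405084") arrived in 1.357199127 s. Quick arithmetic on those two figures:

```python
# Figures from the kube-scheduler pull logged above.
bytes_read = 19_405_084      # "bytes read=19405084"
seconds = 1.357199127        # "in 1.357199127s"

print(f"{bytes_read / seconds / (1 << 20):.1f} MiB/s")  # ~13.6 MiB/s
```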
Jan 17 00:30:08.276458 containerd[1828]: time="2026-01-17T00:30:08.276386899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:08.278589 containerd[1828]: time="2026-01-17T00:30:08.278519853Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=31161907" Jan 17 00:30:08.282195 containerd[1828]: time="2026-01-17T00:30:08.282151146Z" level=info msg="ImageCreate event name:\"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:08.286413 containerd[1828]: time="2026-01-17T00:30:08.286336053Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:08.287706 containerd[1828]: time="2026-01-17T00:30:08.287145573Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"31160918\" in 1.870007312s" Jan 17 00:30:08.287706 containerd[1828]: time="2026-01-17T00:30:08.287195075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:4d8fb2dc5751966f058943ff7c5f10551e603d726ab8648c7c7b7f95a2663e3d\"" Jan 17 00:30:08.288068 containerd[1828]: time="2026-01-17T00:30:08.287998395Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:30:08.926854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206748450.mount: Deactivated successfully. 
Jan 17 00:30:10.174835 containerd[1828]: time="2026-01-17T00:30:10.174765806Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.177517 containerd[1828]: time="2026-01-17T00:30:10.177297062Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jan 17 00:30:10.181624 containerd[1828]: time="2026-01-17T00:30:10.181178048Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.187320 containerd[1828]: time="2026-01-17T00:30:10.187245182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.188968 containerd[1828]: time="2026-01-17T00:30:10.188910319Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.900870423s" Jan 17 00:30:10.189170 containerd[1828]: time="2026-01-17T00:30:10.189144924Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 17 00:30:10.190233 containerd[1828]: time="2026-01-17T00:30:10.190190047Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:30:10.837670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394673408.mount: Deactivated successfully. 
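Every successful pull ends with a "Pulled image ... in <duration>" record like the coredns line above. A regex sketch that extracts the image reference and duration from such journal lines; the sample is abbreviated from the log, and the pattern accounts for the escaped quotes journald prints:

```python
import re

# Matches containerd's: msg="Pulled image \"<ref>\" ... in <duration>"
PULLED = re.compile(r'Pulled image \\"(?P<ref>[^\\]+)\\".* in (?P<dur>[\d.]+m?s)')

line = ('level=info msg="Pulled image \\"registry.k8s.io/coredns/coredns:v1.11.3\\" '
        'with image id \\"sha256:c69f...\\" in 1.900870423s"')

m = PULLED.search(line)
if m:
    print(m.group("ref"), m.group("dur"))
    # registry.k8s.io/coredns/coredns:v1.11.3 1.900870423s
```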
Jan 17 00:30:10.861398 containerd[1828]: time="2026-01-17T00:30:10.861322819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.865871 containerd[1828]: time="2026-01-17T00:30:10.865769318Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 17 00:30:10.869616 containerd[1828]: time="2026-01-17T00:30:10.869525701Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.875096 containerd[1828]: time="2026-01-17T00:30:10.875014623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:10.876072 containerd[1828]: time="2026-01-17T00:30:10.875867641Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 685.639293ms" Jan 17 00:30:10.876072 containerd[1828]: time="2026-01-17T00:30:10.875918943Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 17 00:30:10.876834 containerd[1828]: time="2026-01-17T00:30:10.876800462Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:30:11.541452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669475417.mount: Deactivated successfully. Jan 17 00:30:13.399027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:30:13.407161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:13.606720 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:13.622098 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:30:14.249596 kubelet[2856]: E0117 00:30:14.249457 2856 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:30:14.253610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:30:14.253924 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
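The kubelet restart attempts land at 00:29:41, 00:29:52, 00:30:02, and 00:30:13 (counters 3 through 6), roughly 10-11 s apart, consistent with a unit running under Restart= with a RestartSec of about 10 s (kubeadm's kubelet drop-in uses RestartSec=10; an assumption for this image). Checking the spacing from the timestamps above:

```python
from datetime import datetime

# "Scheduled restart job" timestamps for counters 3..6, from the log above.
times = [datetime.strptime(s, "%H:%M:%S.%f")
         for s in ("00:29:41.148821", "00:29:52.148848",
                   "00:30:02.398527", "00:30:13.399027")]

for earlier, later in zip(times, times[1:]):
    print(f"{(later - earlier).total_seconds():.1f} s")  # 11.0, 10.2, 11.0
```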
Jan 17 00:30:14.536424 containerd[1828]: time="2026-01-17T00:30:14.536230153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:14.548070 containerd[1828]: time="2026-01-17T00:30:14.547985913Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064" Jan 17 00:30:14.551685 containerd[1828]: time="2026-01-17T00:30:14.551594993Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:14.556486 containerd[1828]: time="2026-01-17T00:30:14.556427300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:14.557883 containerd[1828]: time="2026-01-17T00:30:14.557617727Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.680778863s" Jan 17 00:30:14.557883 containerd[1828]: time="2026-01-17T00:30:14.557674928Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 17 00:30:17.445001 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:17.459090 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:17.513495 systemd[1]: Reloading requested from client PID 2896 ('systemctl') (unit session-9.scope)... Jan 17 00:30:17.513522 systemd[1]: Reloading... Jan 17 00:30:17.679111 zram_generator::config[2937]: No configuration found. Jan 17 00:30:17.814275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:30:17.897831 systemd[1]: Reloading finished in 383 ms. Jan 17 00:30:17.945463 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:30:17.945575 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:30:17.946046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:17.954979 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:18.264000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:18.275267 (kubelet)[3015]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:30:18.318403 kubelet[3015]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:30:18.318403 kubelet[3015]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
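The deprecation warnings above say these flags belong in the KubeletConfiguration file passed via --config. A sketch of emitting such a file for the runtime-endpoint setting; the v1beta1 field name and the containerd socket path are assumptions to verify against the kubelet docs for this version, and the output path is deliberately a scratch file since kubeadm owns /var/lib/kubelet/config.yaml on this node:

```python
import json

# Minimal KubeletConfiguration carrying what --container-runtime-endpoint
# passes on the command line today. The v1beta1 field name is an
# assumption; verify it against the docs for this kubelet version.
config = {
    "apiVersion": "kubelet.config.k8s.io/v1beta1",
    "kind": "KubeletConfiguration",
    "containerRuntimeEndpoint": "unix:///run/containerd/containerd.sock",
}

with open("kubelet-config.json", "w") as f:
    json.dump(config, f, indent=2)  # JSON is a YAML subset; kubelet parses it
```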
Jan 17 00:30:18.318403 kubelet[3015]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:30:18.319095 kubelet[3015]: I0117 00:30:18.318505 3015 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:30:19.080631 kubelet[3015]: I0117 00:30:19.080235 3015 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:30:19.080631 kubelet[3015]: I0117 00:30:19.080303 3015 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:30:19.081128 kubelet[3015]: I0117 00:30:19.081006 3015 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:30:19.187990 kubelet[3015]: E0117 00:30:19.187932 3015 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.33:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:19.189124 kubelet[3015]: I0117 00:30:19.189066 3015 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:30:19.201893 kubelet[3015]: E0117 00:30:19.201836 3015 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:30:19.201893 kubelet[3015]: I0117 00:30:19.201884 3015 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:30:19.205923 kubelet[3015]: I0117 00:30:19.205890 3015 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:30:19.206423 kubelet[3015]: I0117 00:30:19.206365 3015 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:30:19.206640 kubelet[3015]: I0117 00:30:19.206411 3015 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2e1a0c4804","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:30:19.206854 kubelet[3015]: I0117 00:30:19.206649 3015 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:30:19.206854 kubelet[3015]: I0117 00:30:19.206664 3015 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:30:19.206942 kubelet[3015]: I0117 00:30:19.206860 3015 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:30:19.210126 kubelet[3015]: I0117 00:30:19.210096 3015 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:30:19.212238 kubelet[3015]: I0117 00:30:19.211770 3015 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:30:19.212238 kubelet[3015]: I0117 00:30:19.211825 3015 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:30:19.212238 kubelet[3015]: I0117 00:30:19.211844 3015 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:30:19.214419 kubelet[3015]: W0117 00:30:19.213711 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2e1a0c4804&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:19.214419 kubelet[3015]: E0117 00:30:19.213829 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2e1a0c4804&limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:19.214555 
kubelet[3015]: W0117 00:30:19.214522 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:19.214608 kubelet[3015]: E0117 00:30:19.214561 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:19.215182 kubelet[3015]: I0117 00:30:19.215157 3015 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:30:19.215870 kubelet[3015]: I0117 00:30:19.215847 3015 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:30:19.216879 kubelet[3015]: W0117 00:30:19.216857 3015 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:30:19.220678 kubelet[3015]: I0117 00:30:19.220187 3015 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:30:19.220678 kubelet[3015]: I0117 00:30:19.220234 3015 server.go:1287] "Started kubelet" Jan 17 00:30:19.227784 kubelet[3015]: E0117 00:30:19.226107 3015 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.33:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.33:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-2e1a0c4804.188b5d4294e36359 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-2e1a0c4804,UID:ci-4081.3.6-n-2e1a0c4804,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-2e1a0c4804,},FirstTimestamp:2026-01-17 00:30:19.220206425 +0000 UTC m=+0.940386947,LastTimestamp:2026-01-17 00:30:19.220206425 +0000 UTC m=+0.940386947,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-2e1a0c4804,}" Jan 17 00:30:19.227784 kubelet[3015]: I0117 00:30:19.227677 3015 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:30:19.231886 kubelet[3015]: I0117 00:30:19.231827 3015 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:30:19.233248 kubelet[3015]: I0117 00:30:19.233214 3015 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:30:19.235932 kubelet[3015]: I0117 00:30:19.235528 3015 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:30:19.235932 kubelet[3015]: I0117 00:30:19.235555 3015 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:30:19.235932 kubelet[3015]: I0117 00:30:19.235905 3015 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:30:19.236141 kubelet[3015]: E0117 00:30:19.235914 3015 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" Jan 17 00:30:19.236219 kubelet[3015]: I0117 00:30:19.236199 3015 dynamic_serving_content.go:135] 
"Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:30:19.237537 kubelet[3015]: I0117 00:30:19.236680 3015 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:30:19.237537 kubelet[3015]: I0117 00:30:19.236760 3015 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:30:19.237537 kubelet[3015]: W0117 00:30:19.237160 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:19.237537 kubelet[3015]: E0117 00:30:19.237219 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:19.237537 kubelet[3015]: E0117 00:30:19.237296 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2e1a0c4804?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="200ms" Jan 17 00:30:19.238909 kubelet[3015]: I0117 00:30:19.238889 3015 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:30:19.239097 kubelet[3015]: I0117 00:30:19.239077 3015 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:30:19.240136 kubelet[3015]: E0117 00:30:19.240115 3015 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:30:19.240470 kubelet[3015]: I0117 00:30:19.240453 3015 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:30:19.271665 kubelet[3015]: I0117 00:30:19.271595 3015 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:30:19.273391 kubelet[3015]: I0117 00:30:19.273355 3015 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:30:19.273391 kubelet[3015]: I0117 00:30:19.273394 3015 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:30:19.273557 kubelet[3015]: I0117 00:30:19.273423 3015 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:30:19.273557 kubelet[3015]: I0117 00:30:19.273434 3015 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:30:19.273557 kubelet[3015]: E0117 00:30:19.273501 3015 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:30:19.281710 kubelet[3015]: W0117 00:30:19.281659 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:19.281891 kubelet[3015]: E0117 00:30:19.281719 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:19.298128 kubelet[3015]: I0117 00:30:19.298090 3015 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:30:19.298128 kubelet[3015]: I0117 00:30:19.298112 3015 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:30:19.298382 kubelet[3015]: I0117 00:30:19.298155 3015 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:30:19.303525 kubelet[3015]: I0117 00:30:19.303484 3015 policy_none.go:49] "None policy: Start" Jan 17 00:30:19.303525 kubelet[3015]: I0117 00:30:19.303522 3015 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:30:19.303525 kubelet[3015]: I0117 00:30:19.303540 3015 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:30:19.312556 kubelet[3015]: I0117 00:30:19.312510 3015 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:30:19.314776 kubelet[3015]: I0117 00:30:19.312825 3015 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:30:19.314776 kubelet[3015]: I0117 00:30:19.312849 3015 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:30:19.315719 kubelet[3015]: I0117 00:30:19.315505 3015 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:30:19.316791 kubelet[3015]: E0117 00:30:19.316771 3015 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:30:19.316947 kubelet[3015]: E0117 00:30:19.316935 3015 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-2e1a0c4804\" not found" Jan 17 00:30:19.381048 kubelet[3015]: E0117 00:30:19.380917 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.388516 kubelet[3015]: E0117 00:30:19.388492 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.390780 kubelet[3015]: E0117 00:30:19.390306 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.415687 kubelet[3015]: I0117 00:30:19.415649 3015 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.416163 kubelet[3015]: E0117 00:30:19.416128 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438306 kubelet[3015]: I0117 00:30:19.437994 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438306 kubelet[3015]: I0117 00:30:19.438063 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438306 kubelet[3015]: I0117 00:30:19.438101 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438306 kubelet[3015]: I0117 00:30:19.438128 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438306 kubelet[3015]: I0117 00:30:19.438157 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c03585f4b8f2e1f264d9c5b7de15051-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2e1a0c4804\" (UID: \"9c03585f4b8f2e1f264d9c5b7de15051\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 
00:30:19.438698 kubelet[3015]: E0117 00:30:19.438170 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2e1a0c4804?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="400ms" Jan 17 00:30:19.438698 kubelet[3015]: I0117 00:30:19.438185 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438698 kubelet[3015]: I0117 00:30:19.438249 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438698 kubelet[3015]: I0117 00:30:19.438279 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.438698 kubelet[3015]: I0117 00:30:19.438308 3015 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.619697 kubelet[3015]: I0117 00:30:19.619230 3015 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.619697 kubelet[3015]: E0117 00:30:19.619643 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:19.688951 containerd[1828]: time="2026-01-17T00:30:19.688879257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2e1a0c4804,Uid:040047ea7b4e3ae9c1aa409786f62c5b,Namespace:kube-system,Attempt:0,}" Jan 17 00:30:19.690181 containerd[1828]: time="2026-01-17T00:30:19.690136991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2e1a0c4804,Uid:5ecf216dd73a77cdde841213e5cb4f6b,Namespace:kube-system,Attempt:0,}" Jan 17 00:30:19.692271 containerd[1828]: time="2026-01-17T00:30:19.692078542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2e1a0c4804,Uid:9c03585f4b8f2e1f264d9c5b7de15051,Namespace:kube-system,Attempt:0,}" Jan 17 00:30:19.839540 kubelet[3015]: E0117 00:30:19.839488 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2e1a0c4804?timeout=10s\": dial tcp 
10.200.8.33:6443: connect: connection refused" interval="800ms" Jan 17 00:30:20.022695 kubelet[3015]: I0117 00:30:20.022549 3015 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:20.023349 kubelet[3015]: E0117 00:30:20.023306 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:20.200569 kubelet[3015]: W0117 00:30:20.200515 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:20.200790 kubelet[3015]: E0117 00:30:20.200585 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.33:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:20.314720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1670231089.mount: Deactivated successfully. Jan 17 00:30:20.338245 containerd[1828]: time="2026-01-17T00:30:20.338169181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:30:20.341056 containerd[1828]: time="2026-01-17T00:30:20.340994356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 00:30:20.343764 containerd[1828]: time="2026-01-17T00:30:20.343715329Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:30:20.346314 containerd[1828]: time="2026-01-17T00:30:20.346262996Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:30:20.349008 containerd[1828]: time="2026-01-17T00:30:20.348964868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:30:20.352573 containerd[1828]: time="2026-01-17T00:30:20.352528762Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:30:20.357200 containerd[1828]: time="2026-01-17T00:30:20.357096783Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:30:20.361121 containerd[1828]: time="2026-01-17T00:30:20.361060889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:30:20.362395 containerd[1828]: time="2026-01-17T00:30:20.362023514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.027754ms" Jan 17 00:30:20.363614 containerd[1828]: time="2026-01-17T00:30:20.363564655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.345062ms" Jan 17 00:30:20.365802 containerd[1828]: time="2026-01-17T00:30:20.365766413Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 673.605869ms" Jan 17 00:30:20.512357 kubelet[3015]: W0117 00:30:20.512269 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:20.512880 kubelet[3015]: E0117 00:30:20.512368 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.33:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:20.521639 kubelet[3015]: W0117 00:30:20.521520 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:20.521910 kubelet[3015]: E0117 00:30:20.521675 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.33:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:20.603987 kubelet[3015]: W0117 00:30:20.603789 3015 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2e1a0c4804&limit=500&resourceVersion=0": dial tcp 10.200.8.33:6443: connect: connection refused Jan 17 00:30:20.603987 kubelet[3015]: E0117 00:30:20.603892 3015 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.33:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-2e1a0c4804&limit=500&resourceVersion=0\": dial tcp 10.200.8.33:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:30:20.641040 kubelet[3015]: E0117 00:30:20.640980 3015 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.33:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-2e1a0c4804?timeout=10s\": dial tcp 10.200.8.33:6443: connect: connection refused" interval="1.6s" Jan 17 00:30:20.707005 containerd[1828]: 
time="2026-01-17T00:30:20.706470251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:20.708465 containerd[1828]: time="2026-01-17T00:30:20.708003292Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:20.708465 containerd[1828]: time="2026-01-17T00:30:20.708100095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.708465 containerd[1828]: time="2026-01-17T00:30:20.708227798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.709637 containerd[1828]: time="2026-01-17T00:30:20.709457131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:20.709637 containerd[1828]: time="2026-01-17T00:30:20.709540833Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:20.709637 containerd[1828]: time="2026-01-17T00:30:20.709588734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.709840 containerd[1828]: time="2026-01-17T00:30:20.709763439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.715889 containerd[1828]: time="2026-01-17T00:30:20.715808799Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:20.716092 containerd[1828]: time="2026-01-17T00:30:20.716020105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:20.716336 containerd[1828]: time="2026-01-17T00:30:20.716078106Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.716433 containerd[1828]: time="2026-01-17T00:30:20.716316913Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:20.830281 kubelet[3015]: I0117 00:30:20.829704 3015 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:20.830281 kubelet[3015]: E0117 00:30:20.830232 3015 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.33:6443/api/v1/nodes\": dial tcp 10.200.8.33:6443: connect: connection refused" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:20.836024 containerd[1828]: time="2026-01-17T00:30:20.835965387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-2e1a0c4804,Uid:9c03585f4b8f2e1f264d9c5b7de15051,Namespace:kube-system,Attempt:0,} returns sandbox id \"4b53abf2e6f86e942a0e3940b324eb32891881dda70ad7faf20a4563066428ac\"" Jan 17 00:30:20.853828 containerd[1828]: time="2026-01-17T00:30:20.853170243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-2e1a0c4804,Uid:5ecf216dd73a77cdde841213e5cb4f6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"c18971a329c55976c99bd597bb6d4643725ec93dd196af5f12beb7a7eac28774\"" Jan 17 00:30:20.861282 containerd[1828]: time="2026-01-17T00:30:20.860247331Z" level=info msg="CreateContainer within sandbox \"4b53abf2e6f86e942a0e3940b324eb32891881dda70ad7faf20a4563066428ac\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:30:20.865240 containerd[1828]: time="2026-01-17T00:30:20.864954056Z" level=info msg="CreateContainer within sandbox \"c18971a329c55976c99bd597bb6d4643725ec93dd196af5f12beb7a7eac28774\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:30:20.876881 containerd[1828]: time="2026-01-17T00:30:20.876831771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-2e1a0c4804,Uid:040047ea7b4e3ae9c1aa409786f62c5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b843e1b32d6f26628dc3fd9f343bd0349282a7f5b1bcc219ca03158933a4cd17\"" Jan 17 00:30:20.882245 containerd[1828]: time="2026-01-17T00:30:20.882202313Z" level=info msg="CreateContainer within sandbox \"b843e1b32d6f26628dc3fd9f343bd0349282a7f5b1bcc219ca03158933a4cd17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:30:20.919369 containerd[1828]: time="2026-01-17T00:30:20.919301797Z" level=info msg="CreateContainer within sandbox \"4b53abf2e6f86e942a0e3940b324eb32891881dda70ad7faf20a4563066428ac\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6275337bb062c7bd5f12eec6b53acbecef54c7554f4b2abf8c3e34628a8c137c\"" Jan 17 00:30:20.920288 containerd[1828]: time="2026-01-17T00:30:20.920228922Z" level=info msg="StartContainer for \"6275337bb062c7bd5f12eec6b53acbecef54c7554f4b2abf8c3e34628a8c137c\"" Jan 17 00:30:20.931787 containerd[1828]: time="2026-01-17T00:30:20.931196313Z" level=info msg="CreateContainer within sandbox \"c18971a329c55976c99bd597bb6d4643725ec93dd196af5f12beb7a7eac28774\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e2f3ea4973993dd17950ba30e5db7356033fcf390039d354da7d50fae68635b0\"" Jan 17 00:30:20.932226 containerd[1828]: time="2026-01-17T00:30:20.932203140Z" level=info msg="StartContainer for \"e2f3ea4973993dd17950ba30e5db7356033fcf390039d354da7d50fae68635b0\"" Jan 17 00:30:20.940437 containerd[1828]: time="2026-01-17T00:30:20.940378556Z" level=info msg="CreateContainer within sandbox 
\"b843e1b32d6f26628dc3fd9f343bd0349282a7f5b1bcc219ca03158933a4cd17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1e234ea41e174374683c4c608f3f0f9d505de4749fec6c9d1610b31c8e0b67f0\"" Jan 17 00:30:20.941307 containerd[1828]: time="2026-01-17T00:30:20.941268980Z" level=info msg="StartContainer for \"1e234ea41e174374683c4c608f3f0f9d505de4749fec6c9d1610b31c8e0b67f0\"" Jan 17 00:30:21.118416 containerd[1828]: time="2026-01-17T00:30:21.118255075Z" level=info msg="StartContainer for \"6275337bb062c7bd5f12eec6b53acbecef54c7554f4b2abf8c3e34628a8c137c\" returns successfully" Jan 17 00:30:21.118579 containerd[1828]: time="2026-01-17T00:30:21.118469181Z" level=info msg="StartContainer for \"e2f3ea4973993dd17950ba30e5db7356033fcf390039d354da7d50fae68635b0\" returns successfully" Jan 17 00:30:21.137777 containerd[1828]: time="2026-01-17T00:30:21.136825368Z" level=info msg="StartContainer for \"1e234ea41e174374683c4c608f3f0f9d505de4749fec6c9d1610b31c8e0b67f0\" returns successfully" Jan 17 00:30:21.299496 kubelet[3015]: E0117 00:30:21.299450 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:21.308541 kubelet[3015]: E0117 00:30:21.308493 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:21.315863 kubelet[3015]: E0117 00:30:21.315818 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:22.320820 kubelet[3015]: E0117 00:30:22.320769 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:22.324792 kubelet[3015]: E0117 00:30:22.323871 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:22.324792 kubelet[3015]: E0117 00:30:22.324477 3015 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:22.435139 kubelet[3015]: I0117 00:30:22.435101 3015 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:23.917935 kubelet[3015]: E0117 00:30:23.917869 3015 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-2e1a0c4804\" not found" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:23.994339 kubelet[3015]: I0117 00:30:23.994287 3015 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.037643 kubelet[3015]: I0117 00:30:24.037582 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.053457 kubelet[3015]: E0117 00:30:24.053394 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.053457 
kubelet[3015]: I0117 00:30:24.053454 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.055753 kubelet[3015]: E0117 00:30:24.055704 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2e1a0c4804\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.055895 kubelet[3015]: I0117 00:30:24.055760 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.066508 kubelet[3015]: E0117 00:30:24.066396 3015 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:24.217989 kubelet[3015]: I0117 00:30:24.217116 3015 apiserver.go:52] "Watching apiserver" Jan 17 00:30:24.237173 kubelet[3015]: I0117 00:30:24.237102 3015 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:30:25.340356 kubelet[3015]: I0117 00:30:25.340303 3015 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:25.346571 kubelet[3015]: W0117 00:30:25.346508 3015 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:30:26.118287 systemd[1]: Reloading requested from client PID 3289 ('systemctl') (unit session-9.scope)... Jan 17 00:30:26.118309 systemd[1]: Reloading... Jan 17 00:30:26.230846 zram_generator::config[3329]: No configuration found. Jan 17 00:30:26.389163 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:30:26.479337 systemd[1]: Reloading finished in 360 ms. Jan 17 00:30:26.520991 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:26.540302 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:30:26.540707 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:26.551384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:30:26.701104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:30:26.703906 (kubelet)[3406]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:30:26.761350 kubelet[3406]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:30:26.761834 kubelet[3406]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:30:26.761834 kubelet[3406]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
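
The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" entries at 00:30:24 above are a bootstrap-ordering effect: the kubelet posts mirror pods for its static pods before the API server has finished creating the built-in priority classes, and the retries succeed shortly after. A minimal sketch that polls for the class with client-go, assuming a kubeadm-style admin kubeconfig at /etc/kubernetes/admin.conf (a path not shown in this log):

```go
// Sketch only: wait for the built-in PriorityClass the kubelet needs.
// Kubeconfig path is an assumption, not taken from this log.
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pc, err := cs.SchedulingV1().PriorityClasses().Get(
			context.TODO(), "system-node-critical", metav1.GetOptions{})
		if err == nil {
			// The API server creates this class itself; its value is 2000001000.
			fmt.Printf("%s value=%d\n", pc.Name, pc.Value)
			return
		}
		fmt.Println("not yet:", err) // expected while the apiserver is still coming up
		time.Sleep(2 * time.Second)
	}
}
```

Once the class exists, the later "already exists" mirror-pod errors in this log are the benign steady state.
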
Jan 17 00:30:26.762016 kubelet[3406]: I0117 00:30:26.761961 3406 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:30:26.769807 kubelet[3406]: I0117 00:30:26.769682 3406 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:30:26.769807 kubelet[3406]: I0117 00:30:26.769717 3406 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:30:26.770269 kubelet[3406]: I0117 00:30:26.770243 3406 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:30:26.771666 kubelet[3406]: I0117 00:30:26.771635 3406 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:30:26.775928 kubelet[3406]: I0117 00:30:26.775292 3406 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:30:26.780767 kubelet[3406]: E0117 00:30:26.779317 3406 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:30:26.780767 kubelet[3406]: I0117 00:30:26.779362 3406 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:30:26.785444 kubelet[3406]: I0117 00:30:26.785403 3406 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:30:26.786427 kubelet[3406]: I0117 00:30:26.786381 3406 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:30:26.787382 kubelet[3406]: I0117 00:30:26.786541 3406 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-2e1a0c4804","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:30:26.787605 kubelet[3406]: I0117 00:30:26.787402 3406 
topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:30:26.787605 kubelet[3406]: I0117 00:30:26.787418 3406 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:30:26.787605 kubelet[3406]: I0117 00:30:26.787496 3406 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:30:26.787733 kubelet[3406]: I0117 00:30:26.787718 3406 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:30:26.787801 kubelet[3406]: I0117 00:30:26.787784 3406 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:30:26.787846 kubelet[3406]: I0117 00:30:26.787818 3406 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:30:26.787846 kubelet[3406]: I0117 00:30:26.787836 3406 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:30:26.793663 kubelet[3406]: I0117 00:30:26.793627 3406 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:30:26.797262 kubelet[3406]: I0117 00:30:26.797229 3406 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:30:26.798028 kubelet[3406]: I0117 00:30:26.797986 3406 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:30:26.798130 kubelet[3406]: I0117 00:30:26.798052 3406 server.go:1287] "Started kubelet" Jan 17 00:30:26.805878 kubelet[3406]: I0117 00:30:26.802031 3406 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:30:26.812913 kubelet[3406]: I0117 00:30:26.812845 3406 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:30:26.815164 kubelet[3406]: I0117 00:30:26.815090 3406 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:30:26.815775 kubelet[3406]: I0117 00:30:26.815735 3406 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:30:26.815942 kubelet[3406]: I0117 00:30:26.815159 3406 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:30:26.817402 kubelet[3406]: I0117 00:30:26.817373 3406 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:30:26.818202 kubelet[3406]: I0117 00:30:26.817546 3406 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:30:26.821076 kubelet[3406]: I0117 00:30:26.821058 3406 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:30:26.821247 kubelet[3406]: I0117 00:30:26.821234 3406 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:30:26.823238 kubelet[3406]: E0117 00:30:26.823212 3406 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:30:26.825661 kubelet[3406]: I0117 00:30:26.825598 3406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:30:26.830216 kubelet[3406]: I0117 00:30:26.829271 3406 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:30:26.830216 kubelet[3406]: I0117 00:30:26.829334 3406 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:30:26.830216 kubelet[3406]: I0117 00:30:26.829364 3406 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
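
The "Adding static pod path" entry above (path="/etc/kubernetes/manifests") is where the control-plane pods enter the picture: manifest files in that directory become static pods, which in turn produce the mirror-pod traffic seen throughout this log. The kubelet's file source is its own implementation; purely as an illustration of the idea, a self-contained directory watch using github.com/fsnotify/fsnotify could look like this:

```go
// Illustrative only: watch a static-pod manifest directory the way a
// file-based pod source conceptually does. Not the kubelet's actual code.
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()

	if err := w.Add("/etc/kubernetes/manifests"); err != nil {
		log.Fatal(err)
	}
	for {
		select {
		case ev := <-w.Events:
			// Create/Write/Remove events correspond to static-pod add/update/delete.
			log.Printf("manifest event: %s %s", ev.Op, ev.Name)
		case err := <-w.Errors:
			log.Println("watch error:", err)
		}
	}
}
```
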
Jan 17 00:30:26.830216 kubelet[3406]: I0117 00:30:26.829391 3406 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:30:26.830216 kubelet[3406]: E0117 00:30:26.829559 3406 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:30:26.835632 kubelet[3406]: I0117 00:30:26.835596 3406 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:30:26.836689 kubelet[3406]: I0117 00:30:26.835890 3406 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:30:26.836689 kubelet[3406]: I0117 00:30:26.836054 3406 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:30:26.901897 kubelet[3406]: I0117 00:30:26.901858 3406 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:30:26.901897 kubelet[3406]: I0117 00:30:26.901881 3406 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:30:26.901897 kubelet[3406]: I0117 00:30:26.901910 3406 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:30:26.902189 kubelet[3406]: I0117 00:30:26.902157 3406 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:30:26.902233 kubelet[3406]: I0117 00:30:26.902174 3406 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:30:26.902233 kubelet[3406]: I0117 00:30:26.902202 3406 policy_none.go:49] "None policy: Start" Jan 17 00:30:26.902233 kubelet[3406]: I0117 00:30:26.902217 3406 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:30:26.902233 kubelet[3406]: I0117 00:30:26.902231 3406 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:30:26.902408 kubelet[3406]: I0117 00:30:26.902384 3406 state_mem.go:75] "Updated machine memory state" Jan 17 00:30:26.904998 kubelet[3406]: I0117 00:30:26.903811 3406 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:30:26.904998 kubelet[3406]: I0117 00:30:26.904068 3406 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:30:26.904998 kubelet[3406]: I0117 00:30:26.904085 3406 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:30:26.905681 kubelet[3406]: I0117 00:30:26.905661 3406 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:30:26.909313 kubelet[3406]: E0117 00:30:26.909269 3406 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:30:26.930994 kubelet[3406]: I0117 00:30:26.930947 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:26.931319 kubelet[3406]: I0117 00:30:26.930989 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:26.931881 kubelet[3406]: I0117 00:30:26.931129 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:26.944553 kubelet[3406]: W0117 00:30:26.944391 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:30:26.950492 kubelet[3406]: W0117 00:30:26.950319 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:30:26.952026 kubelet[3406]: W0117 00:30:26.951649 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:30:26.952026 kubelet[3406]: E0117 00:30:26.951829 3406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2e1a0c4804\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.015261 kubelet[3406]: I0117 00:30:27.015221 3406 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.022870 kubelet[3406]: I0117 00:30:27.022705 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.022870 kubelet[3406]: I0117 00:30:27.022781 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.022870 kubelet[3406]: I0117 00:30:27.022812 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9c03585f4b8f2e1f264d9c5b7de15051-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-2e1a0c4804\" (UID: \"9c03585f4b8f2e1f264d9c5b7de15051\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.022870 kubelet[3406]: I0117 00:30:27.022845 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.022870 kubelet[3406]: I0117 00:30:27.022869 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.023303 kubelet[3406]: I0117 00:30:27.022891 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.023303 kubelet[3406]: I0117 00:30:27.022911 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/040047ea7b4e3ae9c1aa409786f62c5b-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" (UID: \"040047ea7b4e3ae9c1aa409786f62c5b\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.023303 kubelet[3406]: I0117 00:30:27.022942 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.023303 kubelet[3406]: I0117 00:30:27.022966 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5ecf216dd73a77cdde841213e5cb4f6b-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-2e1a0c4804\" (UID: \"5ecf216dd73a77cdde841213e5cb4f6b\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.029349 kubelet[3406]: I0117 00:30:27.029294 3406 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.029526 kubelet[3406]: I0117 00:30:27.029427 3406 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.788936 kubelet[3406]: I0117 00:30:27.788877 3406 apiserver.go:52] "Watching apiserver" Jan 17 00:30:27.822443 kubelet[3406]: I0117 00:30:27.822331 3406 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:30:27.869177 kubelet[3406]: I0117 00:30:27.867874 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.869177 kubelet[3406]: I0117 00:30:27.868291 3406 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.888513 kubelet[3406]: W0117 00:30:27.888165 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 00:30:27.888513 kubelet[3406]: E0117 00:30:27.888246 3406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-2e1a0c4804\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.888831 kubelet[3406]: W0117 00:30:27.888799 3406 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots] Jan 17 00:30:27.888947 kubelet[3406]: E0117 00:30:27.888899 3406 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-2e1a0c4804\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" Jan 17 00:30:27.897242 kubelet[3406]: I0117 00:30:27.897030 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-2e1a0c4804" podStartSLOduration=2.897005557 podStartE2EDuration="2.897005557s" podCreationTimestamp="2026-01-17 00:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:27.896624647 +0000 UTC m=+1.185978666" watchObservedRunningTime="2026-01-17 00:30:27.897005557 +0000 UTC m=+1.186359676" Jan 17 00:30:27.926504 kubelet[3406]: I0117 00:30:27.925619 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-2e1a0c4804" podStartSLOduration=1.925587986 podStartE2EDuration="1.925587986s" podCreationTimestamp="2026-01-17 00:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:27.910868511 +0000 UTC m=+1.200222530" watchObservedRunningTime="2026-01-17 00:30:27.925587986 +0000 UTC m=+1.214942005" Jan 17 00:30:27.940434 kubelet[3406]: I0117 00:30:27.940355 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-2e1a0c4804" podStartSLOduration=1.940331063 podStartE2EDuration="1.940331063s" podCreationTimestamp="2026-01-17 00:30:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:27.927156927 +0000 UTC m=+1.216511046" watchObservedRunningTime="2026-01-17 00:30:27.940331063 +0000 UTC m=+1.229685082" Jan 17 00:30:32.359486 kubelet[3406]: I0117 00:30:32.359434 3406 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:30:32.361173 containerd[1828]: time="2026-01-17T00:30:32.360540863Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
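
The PodCIDR entries around 00:30:32 record the controller-manager allocating 192.168.0.0/24 to this node and the kubelet forwarding it to the runtime over CRI. A minimal sketch that reads the allocation back from the Node object, under the same admin-kubeconfig assumption as above:

```go
// Sketch only: read the PodCIDR the controller-manager allocated to this
// node (the log shows 192.168.0.0/24 arriving at 00:30:32).
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(
		context.TODO(), "ci-4081.3.6-n-2e1a0c4804", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("PodCIDR:", node.Spec.PodCIDR) // "192.168.0.0/24" once allocated
}
```
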
Jan 17 00:30:32.361643 kubelet[3406]: I0117 00:30:32.360820 3406 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:30:33.063986 kubelet[3406]: I0117 00:30:33.063906 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5bf4faa9-f3da-4ca9-ba92-8cb1f285058d-xtables-lock\") pod \"kube-proxy-gcgng\" (UID: \"5bf4faa9-f3da-4ca9-ba92-8cb1f285058d\") " pod="kube-system/kube-proxy-gcgng" Jan 17 00:30:33.065955 kubelet[3406]: I0117 00:30:33.064185 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5bf4faa9-f3da-4ca9-ba92-8cb1f285058d-lib-modules\") pod \"kube-proxy-gcgng\" (UID: \"5bf4faa9-f3da-4ca9-ba92-8cb1f285058d\") " pod="kube-system/kube-proxy-gcgng" Jan 17 00:30:33.066332 kubelet[3406]: I0117 00:30:33.066117 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4frx2\" (UniqueName: \"kubernetes.io/projected/5bf4faa9-f3da-4ca9-ba92-8cb1f285058d-kube-api-access-4frx2\") pod \"kube-proxy-gcgng\" (UID: \"5bf4faa9-f3da-4ca9-ba92-8cb1f285058d\") " pod="kube-system/kube-proxy-gcgng" Jan 17 00:30:33.066332 kubelet[3406]: I0117 00:30:33.066258 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5bf4faa9-f3da-4ca9-ba92-8cb1f285058d-kube-proxy\") pod \"kube-proxy-gcgng\" (UID: \"5bf4faa9-f3da-4ca9-ba92-8cb1f285058d\") " pod="kube-system/kube-proxy-gcgng" Jan 17 00:30:33.321921 containerd[1828]: time="2026-01-17T00:30:33.321404383Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gcgng,Uid:5bf4faa9-f3da-4ca9-ba92-8cb1f285058d,Namespace:kube-system,Attempt:0,}" Jan 17 00:30:33.367410 containerd[1828]: time="2026-01-17T00:30:33.367277554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:33.368615 containerd[1828]: time="2026-01-17T00:30:33.367350155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:33.368615 containerd[1828]: time="2026-01-17T00:30:33.367411057Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:33.368615 containerd[1828]: time="2026-01-17T00:30:33.368211777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:33.467647 containerd[1828]: time="2026-01-17T00:30:33.467570513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gcgng,Uid:5bf4faa9-f3da-4ca9-ba92-8cb1f285058d,Namespace:kube-system,Attempt:0,} returns sandbox id \"169c2238328ca0d48a2376c21e542504cb1b33e84ee1109702ba55c056080c57\"" Jan 17 00:30:33.480465 containerd[1828]: time="2026-01-17T00:30:33.480344539Z" level=info msg="CreateContainer within sandbox \"169c2238328ca0d48a2376c21e542504cb1b33e84ee1109702ba55c056080c57\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:30:33.533592 containerd[1828]: time="2026-01-17T00:30:33.533524435Z" level=info msg="CreateContainer within sandbox \"169c2238328ca0d48a2376c21e542504cb1b33e84ee1109702ba55c056080c57\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3af2bb47c7c132d6e7f8be0d5f72408573d4a5df818e6b931384859acaa547be\"" Jan 17 00:30:33.534602 containerd[1828]: time="2026-01-17T00:30:33.534558860Z" level=info msg="StartContainer for \"3af2bb47c7c132d6e7f8be0d5f72408573d4a5df818e6b931384859acaa547be\"" Jan 17 00:30:33.572281 kubelet[3406]: I0117 00:30:33.571929 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/adb79457-8789-4448-ba2b-dc42de1e2d00-var-lib-calico\") pod \"tigera-operator-7dcd859c48-79qfk\" (UID: \"adb79457-8789-4448-ba2b-dc42de1e2d00\") " pod="tigera-operator/tigera-operator-7dcd859c48-79qfk" Jan 17 00:30:33.572281 kubelet[3406]: I0117 00:30:33.572040 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4qfz\" (UniqueName: \"kubernetes.io/projected/adb79457-8789-4448-ba2b-dc42de1e2d00-kube-api-access-f4qfz\") pod \"tigera-operator-7dcd859c48-79qfk\" (UID: \"adb79457-8789-4448-ba2b-dc42de1e2d00\") " pod="tigera-operator/tigera-operator-7dcd859c48-79qfk" Jan 17 00:30:33.609267 containerd[1828]: time="2026-01-17T00:30:33.609171662Z" level=info msg="StartContainer for \"3af2bb47c7c132d6e7f8be0d5f72408573d4a5df818e6b931384859acaa547be\" returns successfully" Jan 17 00:30:33.791517 containerd[1828]: time="2026-01-17T00:30:33.791462265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-79qfk,Uid:adb79457-8789-4448-ba2b-dc42de1e2d00,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:30:33.843534 containerd[1828]: time="2026-01-17T00:30:33.842964708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:33.843534 containerd[1828]: time="2026-01-17T00:30:33.843088911Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:33.843534 containerd[1828]: time="2026-01-17T00:30:33.843106012Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:33.843534 containerd[1828]: time="2026-01-17T00:30:33.843221415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:33.948598 containerd[1828]: time="2026-01-17T00:30:33.948538258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-79qfk,Uid:adb79457-8789-4448-ba2b-dc42de1e2d00,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"84551d83fed231ae4891c4200f911d342cc9c78fd095bb62482f6af03c71c931\"" Jan 17 00:30:33.952478 containerd[1828]: time="2026-01-17T00:30:33.952427052Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:30:35.286634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3604739905.mount: Deactivated successfully. Jan 17 00:30:35.753929 kubelet[3406]: I0117 00:30:35.753851 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gcgng" podStartSLOduration=3.7538237580000002 podStartE2EDuration="3.753823758s" podCreationTimestamp="2026-01-17 00:30:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:30:33.907004455 +0000 UTC m=+7.196358474" watchObservedRunningTime="2026-01-17 00:30:35.753823758 +0000 UTC m=+9.043177777" Jan 17 00:30:38.325414 containerd[1828]: time="2026-01-17T00:30:38.325343963Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:38.328940 containerd[1828]: time="2026-01-17T00:30:38.328854148Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 17 00:30:38.332109 containerd[1828]: time="2026-01-17T00:30:38.332020824Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:38.337541 containerd[1828]: time="2026-01-17T00:30:38.337405454Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:38.338866 containerd[1828]: time="2026-01-17T00:30:38.338266475Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 4.385784622s" Jan 17 00:30:38.338866 containerd[1828]: time="2026-01-17T00:30:38.338321377Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 17 00:30:38.342769 containerd[1828]: time="2026-01-17T00:30:38.342720583Z" level=info msg="CreateContainer within sandbox \"84551d83fed231ae4891c4200f911d342cc9c78fd095bb62482f6af03c71c931\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:30:38.368378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount629664407.mount: Deactivated successfully. 
Jan 17 00:30:38.373665 containerd[1828]: time="2026-01-17T00:30:38.373613429Z" level=info msg="CreateContainer within sandbox \"84551d83fed231ae4891c4200f911d342cc9c78fd095bb62482f6af03c71c931\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"365a0edae98addb5b89686b3b0ae2bd1b3518688c24d2caa53bb45aa1707752b\"" Jan 17 00:30:38.374759 containerd[1828]: time="2026-01-17T00:30:38.374702355Z" level=info msg="StartContainer for \"365a0edae98addb5b89686b3b0ae2bd1b3518688c24d2caa53bb45aa1707752b\"" Jan 17 00:30:38.452520 containerd[1828]: time="2026-01-17T00:30:38.452451333Z" level=info msg="StartContainer for \"365a0edae98addb5b89686b3b0ae2bd1b3518688c24d2caa53bb45aa1707752b\" returns successfully" Jan 17 00:30:38.913512 kubelet[3406]: I0117 00:30:38.913421 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-79qfk" podStartSLOduration=1.5244187660000001 podStartE2EDuration="5.913396965s" podCreationTimestamp="2026-01-17 00:30:33 +0000 UTC" firstStartedPulling="2026-01-17 00:30:33.950462705 +0000 UTC m=+7.239816724" lastFinishedPulling="2026-01-17 00:30:38.339440904 +0000 UTC m=+11.628794923" observedRunningTime="2026-01-17 00:30:38.912660748 +0000 UTC m=+12.202014867" watchObservedRunningTime="2026-01-17 00:30:38.913396965 +0000 UTC m=+12.202751084" Jan 17 00:30:46.054532 sudo[2491]: pam_unix(sudo:session): session closed for user root Jan 17 00:30:46.161041 sshd[2487]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:46.168464 systemd-logind[1811]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:30:46.172434 systemd[1]: sshd@6-10.200.8.33:22-10.200.16.10:57322.service: Deactivated successfully. Jan 17 00:30:46.180350 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:30:46.187578 systemd-logind[1811]: Removed session 9. 
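
Earlier in this log the lease controller retried "Failed to ensure lease exists, will retry" with a backoff of 400ms, 800ms, then 1.6s while 10.200.8.33:6443 still refused connections; after registration succeeded, the kubelet's heartbeat settled into a Lease object in the kube-node-lease namespace. A sketch that inspects that Lease, same kubeconfig assumption as above:

```go
// Sketch only: read the per-node heartbeat Lease whose creation the kubelet
// was retrying earlier in this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "ci-4081.3.6-n-2e1a0c4804", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("holder=%s renewed=%s\n",
		*lease.Spec.HolderIdentity, lease.Spec.RenewTime)
}
```
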
Jan 17 00:30:52.307526 kubelet[3406]: I0117 00:30:52.305686 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/231351da-404a-4071-bed0-fa98f4ef2e98-typha-certs\") pod \"calico-typha-847767b567-h5hql\" (UID: \"231351da-404a-4071-bed0-fa98f4ef2e98\") " pod="calico-system/calico-typha-847767b567-h5hql" Jan 17 00:30:52.307526 kubelet[3406]: I0117 00:30:52.305771 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmtw9\" (UniqueName: \"kubernetes.io/projected/231351da-404a-4071-bed0-fa98f4ef2e98-kube-api-access-wmtw9\") pod \"calico-typha-847767b567-h5hql\" (UID: \"231351da-404a-4071-bed0-fa98f4ef2e98\") " pod="calico-system/calico-typha-847767b567-h5hql" Jan 17 00:30:52.307526 kubelet[3406]: I0117 00:30:52.305806 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/231351da-404a-4071-bed0-fa98f4ef2e98-tigera-ca-bundle\") pod \"calico-typha-847767b567-h5hql\" (UID: \"231351da-404a-4071-bed0-fa98f4ef2e98\") " pod="calico-system/calico-typha-847767b567-h5hql" Jan 17 00:30:52.406708 kubelet[3406]: I0117 00:30:52.406641 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-cni-log-dir\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.406708 kubelet[3406]: I0117 00:30:52.406704 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f6699916-f1a7-47ed-be30-b198823e5542-node-certs\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407019 kubelet[3406]: I0117 00:30:52.406770 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-cni-bin-dir\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407019 kubelet[3406]: I0117 00:30:52.406809 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dxdn\" (UniqueName: \"kubernetes.io/projected/f6699916-f1a7-47ed-be30-b198823e5542-kube-api-access-5dxdn\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407019 kubelet[3406]: I0117 00:30:52.406834 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-var-run-calico\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407019 kubelet[3406]: I0117 00:30:52.406859 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-policysync\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 
00:30:52.407019 kubelet[3406]: I0117 00:30:52.406884 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6699916-f1a7-47ed-be30-b198823e5542-tigera-ca-bundle\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407201 kubelet[3406]: I0117 00:30:52.406907 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-xtables-lock\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407201 kubelet[3406]: I0117 00:30:52.406931 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-lib-modules\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407201 kubelet[3406]: I0117 00:30:52.406952 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-var-lib-calico\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407201 kubelet[3406]: I0117 00:30:52.406976 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-cni-net-dir\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.407201 kubelet[3406]: I0117 00:30:52.407002 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f6699916-f1a7-47ed-be30-b198823e5542-flexvol-driver-host\") pod \"calico-node-wj5t5\" (UID: \"f6699916-f1a7-47ed-be30-b198823e5542\") " pod="calico-system/calico-node-wj5t5" Jan 17 00:30:52.499860 kubelet[3406]: E0117 00:30:52.497197 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:30:52.514485 kubelet[3406]: E0117 00:30:52.514412 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.516880 kubelet[3406]: W0117 00:30:52.515237 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.516880 kubelet[3406]: E0117 00:30:52.515304 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.517248 kubelet[3406]: E0117 00:30:52.517130 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.517335 kubelet[3406]: W0117 00:30:52.517250 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.517335 kubelet[3406]: E0117 00:30:52.517279 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.527652 kubelet[3406]: E0117 00:30:52.523413 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.527652 kubelet[3406]: W0117 00:30:52.523442 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.527652 kubelet[3406]: E0117 00:30:52.523501 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.527652 kubelet[3406]: E0117 00:30:52.527565 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.527652 kubelet[3406]: W0117 00:30:52.527595 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.527652 kubelet[3406]: E0117 00:30:52.527644 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.528608 containerd[1828]: time="2026-01-17T00:30:52.526134781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847767b567-h5hql,Uid:231351da-404a-4071-bed0-fa98f4ef2e98,Namespace:calico-system,Attempt:0,}" Jan 17 00:30:52.529049 kubelet[3406]: E0117 00:30:52.528093 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.529049 kubelet[3406]: W0117 00:30:52.528125 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.529049 kubelet[3406]: E0117 00:30:52.528147 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.574850 kubelet[3406]: E0117 00:30:52.572872 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.574850 kubelet[3406]: W0117 00:30:52.572925 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.574850 kubelet[3406]: E0117 00:30:52.572964 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.584015 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.588229 kubelet[3406]: W0117 00:30:52.584179 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.584625 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.586763 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.588229 kubelet[3406]: W0117 00:30:52.586788 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.586819 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.587788 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.588229 kubelet[3406]: W0117 00:30:52.587804 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.588229 kubelet[3406]: E0117 00:30:52.587924 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.591625 kubelet[3406]: E0117 00:30:52.588775 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.591625 kubelet[3406]: W0117 00:30:52.588790 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.591625 kubelet[3406]: E0117 00:30:52.588808 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.591625 kubelet[3406]: E0117 00:30:52.591357 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.591625 kubelet[3406]: W0117 00:30:52.591373 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.591625 kubelet[3406]: E0117 00:30:52.591490 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.592122 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.594540 kubelet[3406]: W0117 00:30:52.592138 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.592153 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.592806 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.594540 kubelet[3406]: W0117 00:30:52.592819 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.592929 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.593474 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.594540 kubelet[3406]: W0117 00:30:52.593487 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.594540 kubelet[3406]: E0117 00:30:52.593504 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.595308 kubelet[3406]: E0117 00:30:52.594757 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.595308 kubelet[3406]: W0117 00:30:52.594771 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.595308 kubelet[3406]: E0117 00:30:52.594787 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.595439 kubelet[3406]: E0117 00:30:52.595331 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.595439 kubelet[3406]: W0117 00:30:52.595343 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.595439 kubelet[3406]: E0117 00:30:52.595358 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.595832 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.598768 kubelet[3406]: W0117 00:30:52.595848 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.595862 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.596347 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.598768 kubelet[3406]: W0117 00:30:52.596359 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.596376 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.597041 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.598768 kubelet[3406]: W0117 00:30:52.597054 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.597068 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.598768 kubelet[3406]: E0117 00:30:52.597639 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.599238 kubelet[3406]: W0117 00:30:52.597651 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.599238 kubelet[3406]: E0117 00:30:52.597666 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.599238 kubelet[3406]: E0117 00:30:52.598242 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.599238 kubelet[3406]: W0117 00:30:52.598256 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.599238 kubelet[3406]: E0117 00:30:52.598270 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.599238 kubelet[3406]: E0117 00:30:52.598725 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.599238 kubelet[3406]: W0117 00:30:52.598747 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.599238 kubelet[3406]: E0117 00:30:52.598762 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.599557 kubelet[3406]: E0117 00:30:52.599275 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.599557 kubelet[3406]: W0117 00:30:52.599287 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.599557 kubelet[3406]: E0117 00:30:52.599301 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.599799 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.603765 kubelet[3406]: W0117 00:30:52.599814 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.599827 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.600293 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.603765 kubelet[3406]: W0117 00:30:52.600305 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.600319 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.600819 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.603765 kubelet[3406]: W0117 00:30:52.600831 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.603765 kubelet[3406]: E0117 00:30:52.600845 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.610481 kubelet[3406]: E0117 00:30:52.610424 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.616222 kubelet[3406]: W0117 00:30:52.610564 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.619483 kubelet[3406]: E0117 00:30:52.617836 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.619483 kubelet[3406]: I0117 00:30:52.617930 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a7052c5c-a862-4e62-a623-7782ea46a871-registration-dir\") pod \"csi-node-driver-bnm26\" (UID: \"a7052c5c-a862-4e62-a623-7782ea46a871\") " pod="calico-system/csi-node-driver-bnm26" Jan 17 00:30:52.622842 kubelet[3406]: E0117 00:30:52.622470 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.623628 kubelet[3406]: W0117 00:30:52.623513 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.624141 kubelet[3406]: E0117 00:30:52.624034 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.624749 kubelet[3406]: I0117 00:30:52.624550 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7052c5c-a862-4e62-a623-7782ea46a871-kubelet-dir\") pod \"csi-node-driver-bnm26\" (UID: \"a7052c5c-a862-4e62-a623-7782ea46a871\") " pod="calico-system/csi-node-driver-bnm26" Jan 17 00:30:52.627484 containerd[1828]: time="2026-01-17T00:30:52.626575071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wj5t5,Uid:f6699916-f1a7-47ed-be30-b198823e5542,Namespace:calico-system,Attempt:0,}" Jan 17 00:30:52.628761 kubelet[3406]: E0117 00:30:52.628070 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.628761 kubelet[3406]: W0117 00:30:52.628453 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.630768 kubelet[3406]: E0117 00:30:52.629577 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.630768 kubelet[3406]: W0117 00:30:52.629601 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.630768 kubelet[3406]: E0117 00:30:52.629653 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.630768 kubelet[3406]: I0117 00:30:52.629736 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a7052c5c-a862-4e62-a623-7782ea46a871-varrun\") pod \"csi-node-driver-bnm26\" (UID: \"a7052c5c-a862-4e62-a623-7782ea46a871\") " pod="calico-system/csi-node-driver-bnm26" Jan 17 00:30:52.630768 kubelet[3406]: E0117 00:30:52.630020 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.630768 kubelet[3406]: E0117 00:30:52.630238 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.630768 kubelet[3406]: W0117 00:30:52.630252 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.630768 kubelet[3406]: E0117 00:30:52.630536 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.630768 kubelet[3406]: W0117 00:30:52.630549 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.631216 kubelet[3406]: E0117 00:30:52.630581 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.631216 kubelet[3406]: E0117 00:30:52.630657 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.631216 kubelet[3406]: I0117 00:30:52.630685 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a7052c5c-a862-4e62-a623-7782ea46a871-socket-dir\") pod \"csi-node-driver-bnm26\" (UID: \"a7052c5c-a862-4e62-a623-7782ea46a871\") " pod="calico-system/csi-node-driver-bnm26" Jan 17 00:30:52.631216 kubelet[3406]: E0117 00:30:52.631068 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.631216 kubelet[3406]: W0117 00:30:52.631086 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.631216 kubelet[3406]: E0117 00:30:52.631141 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.631477 kubelet[3406]: E0117 00:30:52.631424 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.631477 kubelet[3406]: W0117 00:30:52.631435 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.631573 kubelet[3406]: E0117 00:30:52.631546 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.633605 kubelet[3406]: E0117 00:30:52.631840 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.633605 kubelet[3406]: W0117 00:30:52.631858 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.633605 kubelet[3406]: E0117 00:30:52.631881 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.633605 kubelet[3406]: E0117 00:30:52.632901 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.633605 kubelet[3406]: W0117 00:30:52.632919 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.633605 kubelet[3406]: E0117 00:30:52.633437 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.633928 kubelet[3406]: I0117 00:30:52.633702 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nq5q\" (UniqueName: \"kubernetes.io/projected/a7052c5c-a862-4e62-a623-7782ea46a871-kube-api-access-8nq5q\") pod \"csi-node-driver-bnm26\" (UID: \"a7052c5c-a862-4e62-a623-7782ea46a871\") " pod="calico-system/csi-node-driver-bnm26" Jan 17 00:30:52.635718 kubelet[3406]: E0117 00:30:52.635443 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.635718 kubelet[3406]: W0117 00:30:52.635465 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.635718 kubelet[3406]: E0117 00:30:52.635501 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.636772 kubelet[3406]: E0117 00:30:52.636465 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.636772 kubelet[3406]: W0117 00:30:52.636482 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.636903 kubelet[3406]: E0117 00:30:52.636777 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.637363 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.639613 kubelet[3406]: W0117 00:30:52.637380 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.637397 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.638686 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.639613 kubelet[3406]: W0117 00:30:52.638700 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.638737 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.639161 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.639613 kubelet[3406]: W0117 00:30:52.639175 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.639613 kubelet[3406]: E0117 00:30:52.639189 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.668873 containerd[1828]: time="2026-01-17T00:30:52.668536312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:52.668873 containerd[1828]: time="2026-01-17T00:30:52.668627714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:52.668873 containerd[1828]: time="2026-01-17T00:30:52.668690715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:52.673214 containerd[1828]: time="2026-01-17T00:30:52.670026549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:52.727956 containerd[1828]: time="2026-01-17T00:30:52.726549150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:30:52.727956 containerd[1828]: time="2026-01-17T00:30:52.726673853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:30:52.727956 containerd[1828]: time="2026-01-17T00:30:52.726701154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:52.727956 containerd[1828]: time="2026-01-17T00:30:52.726858358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:30:52.742849 kubelet[3406]: E0117 00:30:52.742757 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.742849 kubelet[3406]: W0117 00:30:52.742797 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.742849 kubelet[3406]: E0117 00:30:52.742847 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.744936 kubelet[3406]: E0117 00:30:52.744899 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.744936 kubelet[3406]: W0117 00:30:52.744935 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.744936 kubelet[3406]: E0117 00:30:52.744992 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.746034 kubelet[3406]: E0117 00:30:52.745565 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.746034 kubelet[3406]: W0117 00:30:52.745583 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.746034 kubelet[3406]: E0117 00:30:52.746002 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.747188 kubelet[3406]: E0117 00:30:52.747165 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.747188 kubelet[3406]: W0117 00:30:52.747184 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.747341 kubelet[3406]: E0117 00:30:52.747204 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.749041 kubelet[3406]: E0117 00:30:52.748890 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.749041 kubelet[3406]: W0117 00:30:52.748907 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.749166 kubelet[3406]: E0117 00:30:52.749106 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.749386 kubelet[3406]: E0117 00:30:52.749296 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.749386 kubelet[3406]: W0117 00:30:52.749310 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.749386 kubelet[3406]: E0117 00:30:52.749355 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.750819 kubelet[3406]: E0117 00:30:52.750793 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.750819 kubelet[3406]: W0117 00:30:52.750819 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.751216 kubelet[3406]: E0117 00:30:52.751056 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.751216 kubelet[3406]: W0117 00:30:52.751071 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.752820 kubelet[3406]: E0117 00:30:52.752703 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.752820 kubelet[3406]: E0117 00:30:52.752717 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.753014 kubelet[3406]: E0117 00:30:52.752997 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.753064 kubelet[3406]: W0117 00:30:52.753015 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.753147 kubelet[3406]: E0117 00:30:52.753127 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.753390 kubelet[3406]: E0117 00:30:52.753369 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.753390 kubelet[3406]: W0117 00:30:52.753385 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.754229 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.754425 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.755762 kubelet[3406]: W0117 00:30:52.754451 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.754537 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.755031 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.755762 kubelet[3406]: W0117 00:30:52.755043 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.755361 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.755762 kubelet[3406]: E0117 00:30:52.755555 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.755762 kubelet[3406]: W0117 00:30:52.755564 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.757699 kubelet[3406]: E0117 00:30:52.755783 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.757699 kubelet[3406]: E0117 00:30:52.756061 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.757699 kubelet[3406]: W0117 00:30:52.756073 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.757699 kubelet[3406]: E0117 00:30:52.756580 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759022 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.760688 kubelet[3406]: W0117 00:30:52.759041 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759267 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.760688 kubelet[3406]: W0117 00:30:52.759278 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759478 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.760688 kubelet[3406]: W0117 00:30:52.759489 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759672 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.760688 kubelet[3406]: W0117 00:30:52.759683 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759703 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.760688 kubelet[3406]: E0117 00:30:52.759786 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.759806 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.760163 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.761667 kubelet[3406]: W0117 00:30:52.760181 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.760202 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.760546 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.761667 kubelet[3406]: W0117 00:30:52.760561 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.760580 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.761667 kubelet[3406]: E0117 00:30:52.759828 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.764775 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.766791 kubelet[3406]: W0117 00:30:52.764798 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.764831 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.765173 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.766791 kubelet[3406]: W0117 00:30:52.765185 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.765270 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.765601 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.766791 kubelet[3406]: W0117 00:30:52.765626 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.766791 kubelet[3406]: E0117 00:30:52.765929 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:52.766791 kubelet[3406]: W0117 00:30:52.765945 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:52.767258 kubelet[3406]: E0117 00:30:52.765960 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" 
Jan 17 00:30:52.767258 kubelet[3406]: E0117 00:30:52.766825 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:52.771337 kubelet[3406]: E0117 00:30:52.770473 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:52.771337 kubelet[3406]: W0117 00:30:52.770492 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:52.771337 kubelet[3406]: E0117 00:30:52.770514 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:52.780226 kubelet[3406]: E0117 00:30:52.780120 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:52.780226 kubelet[3406]: W0117 00:30:52.780145 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:52.780226 kubelet[3406]: E0117 00:30:52.780173 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:52.809861 containerd[1828]: time="2026-01-17T00:30:52.809639810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-847767b567-h5hql,Uid:231351da-404a-4071-bed0-fa98f4ef2e98,Namespace:calico-system,Attempt:0,} returns sandbox id \"4a14dc689dd052b495672a527aca511b17ee1ded412125c34796218b0a433e78\"" 
Jan 17 00:30:52.811968 containerd[1828]: time="2026-01-17T00:30:52.811934967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wj5t5,Uid:f6699916-f1a7-47ed-be30-b198823e5542,Namespace:calico-system,Attempt:0,} returns sandbox id \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\"" 
Jan 17 00:30:52.816577 containerd[1828]: time="2026-01-17T00:30:52.816541781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" 
Jan 17 00:30:54.158302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1820989359.mount: Deactivated successfully. 
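The repeated kubelet triplets above are a single failure reported three ways: the dynamic plugin prober walks the FlexVolume directory, execs nodeagent~uds/uds with the argument init, the binary does not exist yet, so the call produces no output, and unmarshalling that empty output fails with encoding/json's "unexpected end of JSON input". A minimal Go sketch of that chain follows; it is an illustration, not kubelet's actual driver-call.go, and the DriverStatus shape is an assumed subset of the FlexVolume contract (a working driver would answer init with something like {"status":"Success","capabilities":{"attach":false}}).

```go
// probe.go: reproduce the two errors logged above. Plain os/exec reports the
// missing binary as "no such file or directory"; kubelet's exec wrapper
// surfaces the "executable file not found in $PATH" variant seen in the log.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// DriverStatus is an illustrative subset of the JSON a FlexVolume driver
// is expected to print on stdout; the real contract has more fields.
type DriverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// Path taken from the log; at this point in boot nothing has installed it.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").CombinedOutput()
	if err != nil {
		// Corresponds to: FlexVolume: driver call failed: executable: ..., output: ""
		fmt.Printf("driver call failed: executable: %s, args: [init], error: %v, output: %q\n", driver, err, out)
	}

	var st DriverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly the logged message:
		// Failed to unmarshal output for command: init, ... unexpected end of JSON input
		fmt.Printf("failed to unmarshal output for command: init, output: %q, error: %v\n", out, err)
	}
}
```

The messages repeat as kubelet re-probes the plugin directory, and they are expected to stop once the driver binary lands: the ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4 image pulled below carries the flexvol driver that calico-node installs via the flexvol-driver-host host-path mount listed earlier. Two timings in the entries below also cross-check: 28.678730292 - 26.104634569 = 2.574095723 s spent pulling (the m=+ monotonic offsets of firstStartedPulling and lastFinishedPulling), and podStartE2EDuration 3.971848515 s minus that pull window is 1.397752792 s, exactly the logged podStartSLOduration, consistent with the SLO figure excluding image-pull time.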
Jan 17 00:30:54.831584 kubelet[3406]: E0117 00:30:54.831533 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" 
Jan 17 00:30:55.375188 containerd[1828]: time="2026-01-17T00:30:55.375120020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 17 00:30:55.377797 containerd[1828]: time="2026-01-17T00:30:55.377713184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628" 
Jan 17 00:30:55.383181 containerd[1828]: time="2026-01-17T00:30:55.383076417Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 17 00:30:55.387212 containerd[1828]: time="2026-01-17T00:30:55.387121117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 17 00:30:55.388259 containerd[1828]: time="2026-01-17T00:30:55.387987439Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.571065448s" 
Jan 17 00:30:55.388259 containerd[1828]: time="2026-01-17T00:30:55.388037940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\"" 
Jan 17 00:30:55.392291 containerd[1828]: time="2026-01-17T00:30:55.392232744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" 
Jan 17 00:30:55.411900 containerd[1828]: time="2026-01-17T00:30:55.411774628Z" level=info msg="CreateContainer within sandbox \"4a14dc689dd052b495672a527aca511b17ee1ded412125c34796218b0a433e78\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" 
Jan 17 00:30:55.452283 containerd[1828]: time="2026-01-17T00:30:55.452215531Z" level=info msg="CreateContainer within sandbox \"4a14dc689dd052b495672a527aca511b17ee1ded412125c34796218b0a433e78\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2f195fc0910e488d240dfb8915a4d5a3f6ea3e793d4e2119d1fe26b40e9b5589\"" 
Jan 17 00:30:55.453995 containerd[1828]: time="2026-01-17T00:30:55.453948274Z" level=info msg="StartContainer for \"2f195fc0910e488d240dfb8915a4d5a3f6ea3e793d4e2119d1fe26b40e9b5589\"" 
Jan 17 00:30:55.547119 containerd[1828]: time="2026-01-17T00:30:55.547053683Z" level=info msg="StartContainer for \"2f195fc0910e488d240dfb8915a4d5a3f6ea3e793d4e2119d1fe26b40e9b5589\" returns successfully" 
Jan 17 00:30:55.972590 kubelet[3406]: I0117 00:30:55.971880 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-847767b567-h5hql" podStartSLOduration=1.397752792 podStartE2EDuration="3.971848515s" podCreationTimestamp="2026-01-17 00:30:52 +0000 UTC" firstStartedPulling="2026-01-17 00:30:52.81528055 +0000 UTC m=+26.104634569" lastFinishedPulling="2026-01-17 00:30:55.389376273 +0000 UTC m=+28.678730292" observedRunningTime="2026-01-17 00:30:55.971269901 +0000 UTC m=+29.260624020" watchObservedRunningTime="2026-01-17 00:30:55.971848515 +0000 UTC m=+29.261202534" 
Jan 17 00:30:56.031099 kubelet[3406]: E0117 00:30:56.031047 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:56.031099 kubelet[3406]: W0117 00:30:56.031087 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:56.031484 kubelet[3406]: E0117 00:30:56.031123 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:56.031484 kubelet[3406]: E0117 00:30:56.031416 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:56.031484 kubelet[3406]: W0117 00:30:56.031431 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:56.031484 kubelet[3406]: E0117 00:30:56.031447 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:56.031720 kubelet[3406]: E0117 00:30:56.031677 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:56.031720 kubelet[3406]: W0117 00:30:56.031689 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:56.031720 kubelet[3406]: E0117 00:30:56.031704 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 17 00:30:56.031964 kubelet[3406]: E0117 00:30:56.031952 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input 
Jan 17 00:30:56.032005 kubelet[3406]: W0117 00:30:56.031965 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" 
Jan 17 00:30:56.032005 kubelet[3406]: E0117 00:30:56.031979 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.032220 kubelet[3406]: E0117 00:30:56.032204 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.032301 kubelet[3406]: W0117 00:30:56.032221 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.032301 kubelet[3406]: E0117 00:30:56.032235 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.032443 kubelet[3406]: E0117 00:30:56.032435 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.032443 kubelet[3406]: W0117 00:30:56.032447 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.032585 kubelet[3406]: E0117 00:30:56.032460 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.032680 kubelet[3406]: E0117 00:30:56.032662 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.032680 kubelet[3406]: W0117 00:30:56.032677 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.032849 kubelet[3406]: E0117 00:30:56.032691 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.032923 kubelet[3406]: E0117 00:30:56.032904 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.032923 kubelet[3406]: W0117 00:30:56.032917 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.033048 kubelet[3406]: E0117 00:30:56.032930 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.033153 kubelet[3406]: E0117 00:30:56.033138 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.033153 kubelet[3406]: W0117 00:30:56.033151 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.033304 kubelet[3406]: E0117 00:30:56.033164 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.033373 kubelet[3406]: E0117 00:30:56.033353 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.033373 kubelet[3406]: W0117 00:30:56.033363 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.033507 kubelet[3406]: E0117 00:30:56.033376 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.033593 kubelet[3406]: E0117 00:30:56.033557 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.033593 kubelet[3406]: W0117 00:30:56.033568 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.033593 kubelet[3406]: E0117 00:30:56.033580 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.033843 kubelet[3406]: E0117 00:30:56.033800 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.033843 kubelet[3406]: W0117 00:30:56.033812 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.033843 kubelet[3406]: E0117 00:30:56.033824 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.034067 kubelet[3406]: E0117 00:30:56.034038 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.034067 kubelet[3406]: W0117 00:30:56.034048 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.034067 kubelet[3406]: E0117 00:30:56.034060 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.034284 kubelet[3406]: E0117 00:30:56.034260 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.034284 kubelet[3406]: W0117 00:30:56.034273 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.034434 kubelet[3406]: E0117 00:30:56.034286 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.034496 kubelet[3406]: E0117 00:30:56.034485 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.034496 kubelet[3406]: W0117 00:30:56.034499 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.034600 kubelet[3406]: E0117 00:30:56.034511 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.084401 kubelet[3406]: E0117 00:30:56.084350 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.084401 kubelet[3406]: W0117 00:30:56.084392 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.084891 kubelet[3406]: E0117 00:30:56.084431 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.084891 kubelet[3406]: E0117 00:30:56.084849 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.084891 kubelet[3406]: W0117 00:30:56.084869 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.085184 kubelet[3406]: E0117 00:30:56.084894 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.085184 kubelet[3406]: E0117 00:30:56.085175 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.085307 kubelet[3406]: W0117 00:30:56.085188 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.085307 kubelet[3406]: E0117 00:30:56.085210 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.085582 kubelet[3406]: E0117 00:30:56.085548 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.085582 kubelet[3406]: W0117 00:30:56.085567 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.085792 kubelet[3406]: E0117 00:30:56.085594 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.085940 kubelet[3406]: E0117 00:30:56.085921 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.085940 kubelet[3406]: W0117 00:30:56.085938 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.086077 kubelet[3406]: E0117 00:30:56.086057 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.086238 kubelet[3406]: E0117 00:30:56.086222 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.086238 kubelet[3406]: W0117 00:30:56.086236 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.086427 kubelet[3406]: E0117 00:30:56.086408 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.086573 kubelet[3406]: E0117 00:30:56.086557 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.086573 kubelet[3406]: W0117 00:30:56.086570 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.086737 kubelet[3406]: E0117 00:30:56.086669 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.086868 kubelet[3406]: E0117 00:30:56.086851 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.086868 kubelet[3406]: W0117 00:30:56.086866 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.087023 kubelet[3406]: E0117 00:30:56.086889 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.087154 kubelet[3406]: E0117 00:30:56.087140 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.087154 kubelet[3406]: W0117 00:30:56.087153 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.087276 kubelet[3406]: E0117 00:30:56.087171 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.087652 kubelet[3406]: E0117 00:30:56.087632 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.087652 kubelet[3406]: W0117 00:30:56.087647 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.087835 kubelet[3406]: E0117 00:30:56.087671 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.087952 kubelet[3406]: E0117 00:30:56.087936 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.087952 kubelet[3406]: W0117 00:30:56.087950 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.088081 kubelet[3406]: E0117 00:30:56.088068 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.088340 kubelet[3406]: E0117 00:30:56.088226 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.088340 kubelet[3406]: W0117 00:30:56.088240 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.088340 kubelet[3406]: E0117 00:30:56.088265 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.088526 kubelet[3406]: E0117 00:30:56.088514 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.088598 kubelet[3406]: W0117 00:30:56.088584 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.088675 kubelet[3406]: E0117 00:30:56.088662 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.088971 kubelet[3406]: E0117 00:30:56.088933 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.088971 kubelet[3406]: W0117 00:30:56.088947 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.088971 kubelet[3406]: E0117 00:30:56.088966 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.089420 kubelet[3406]: E0117 00:30:56.089404 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.089420 kubelet[3406]: W0117 00:30:56.089417 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.089552 kubelet[3406]: E0117 00:30:56.089532 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.089827 kubelet[3406]: E0117 00:30:56.089808 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.089827 kubelet[3406]: W0117 00:30:56.089823 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.089926 kubelet[3406]: E0117 00:30:56.089838 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.090100 kubelet[3406]: E0117 00:30:56.090083 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.090100 kubelet[3406]: W0117 00:30:56.090097 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.090220 kubelet[3406]: E0117 00:30:56.090112 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:30:56.090713 kubelet[3406]: E0117 00:30:56.090696 3406 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:30:56.090713 kubelet[3406]: W0117 00:30:56.090710 3406 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:30:56.090834 kubelet[3406]: E0117 00:30:56.090725 3406 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:30:56.607941 containerd[1828]: time="2026-01-17T00:30:56.607873885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:56.610167 containerd[1828]: time="2026-01-17T00:30:56.609769032Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754" Jan 17 00:30:56.615863 containerd[1828]: time="2026-01-17T00:30:56.614844158Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:56.620977 containerd[1828]: time="2026-01-17T00:30:56.620923608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:30:56.621964 containerd[1828]: time="2026-01-17T00:30:56.621919733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.229623488s" Jan 17 00:30:56.622127 containerd[1828]: time="2026-01-17T00:30:56.622101538Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\"" Jan 17 00:30:56.626546 containerd[1828]: time="2026-01-17T00:30:56.626505947Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 00:30:56.663965 containerd[1828]: time="2026-01-17T00:30:56.663902574Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b\"" Jan 17 00:30:56.664968 containerd[1828]: time="2026-01-17T00:30:56.664927899Z" level=info msg="StartContainer for \"2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b\"" Jan 17 00:30:56.751419 containerd[1828]: time="2026-01-17T00:30:56.751324942Z" level=info msg="StartContainer for \"2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b\" returns successfully" Jan 17 00:30:56.803468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b-rootfs.mount: Deactivated successfully. 
Jan 17 00:30:56.833081 kubelet[3406]: E0117 00:30:56.832040 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:30:56.961184 kubelet[3406]: I0117 00:30:56.960735 3406 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:30:57.003516 containerd[1828]: time="2026-01-17T00:30:57.003461893Z" level=error msg="collecting metrics for 2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b" error="cgroups: cgroup deleted: unknown" Jan 17 00:30:58.292939 containerd[1828]: time="2026-01-17T00:30:58.292830913Z" level=info msg="shim disconnected" id=2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b namespace=k8s.io Jan 17 00:30:58.293779 containerd[1828]: time="2026-01-17T00:30:58.292976917Z" level=warning msg="cleaning up after shim disconnected" id=2ba03dcc041e2fd00d0a54d36665c39ca343ed7b871da262f95c4fb5c03ab99b namespace=k8s.io Jan 17 00:30:58.293779 containerd[1828]: time="2026-01-17T00:30:58.292998217Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:30:58.830855 kubelet[3406]: E0117 00:30:58.830239 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:30:58.969262 containerd[1828]: time="2026-01-17T00:30:58.968518748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:31:00.830967 kubelet[3406]: E0117 00:31:00.830398 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:02.225450 containerd[1828]: time="2026-01-17T00:31:02.225380871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:02.228813 containerd[1828]: time="2026-01-17T00:31:02.228719449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:31:02.233863 containerd[1828]: time="2026-01-17T00:31:02.232711543Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:02.240411 containerd[1828]: time="2026-01-17T00:31:02.240360822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:02.241099 containerd[1828]: time="2026-01-17T00:31:02.241040738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", 
size \"71941459\" in 3.272470789s" Jan 17 00:31:02.241099 containerd[1828]: time="2026-01-17T00:31:02.241089239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:31:02.245109 containerd[1828]: time="2026-01-17T00:31:02.244542420Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:31:02.280896 containerd[1828]: time="2026-01-17T00:31:02.280829370Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4\"" Jan 17 00:31:02.281682 containerd[1828]: time="2026-01-17T00:31:02.281571088Z" level=info msg="StartContainer for \"80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4\"" Jan 17 00:31:02.397844 containerd[1828]: time="2026-01-17T00:31:02.397489104Z" level=info msg="StartContainer for \"80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4\" returns successfully" Jan 17 00:31:02.830550 kubelet[3406]: E0117 00:31:02.829943 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:04.110537 containerd[1828]: time="2026-01-17T00:31:04.110449147Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 17 00:31:04.123484 kubelet[3406]: I0117 00:31:04.122215 3406 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:31:04.160391 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4-rootfs.mount: Deactivated successfully. 
Jan 17 00:31:04.354580 kubelet[3406]: I0117 00:31:04.354509 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3b534c16-0d44-4e13-804d-f2f891a56a96-config-volume\") pod \"coredns-668d6bf9bc-dq7hz\" (UID: \"3b534c16-0d44-4e13-804d-f2f891a56a96\") " pod="kube-system/coredns-668d6bf9bc-dq7hz" Jan 17 00:31:04.355299 kubelet[3406]: I0117 00:31:04.354928 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/086626e6-23d7-433b-8fe2-380f0110d591-goldmane-ca-bundle\") pod \"goldmane-666569f655-jt8r9\" (UID: \"086626e6-23d7-433b-8fe2-380f0110d591\") " pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:04.355299 kubelet[3406]: I0117 00:31:04.354996 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrrhm\" (UniqueName: \"kubernetes.io/projected/f248d2c0-f221-4bde-8ea2-75ac2344f18d-kube-api-access-wrrhm\") pod \"calico-kube-controllers-7fddb47c6b-xwhmv\" (UID: \"f248d2c0-f221-4bde-8ea2-75ac2344f18d\") " pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" Jan 17 00:31:04.355299 kubelet[3406]: I0117 00:31:04.355032 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/086626e6-23d7-433b-8fe2-380f0110d591-goldmane-key-pair\") pod \"goldmane-666569f655-jt8r9\" (UID: \"086626e6-23d7-433b-8fe2-380f0110d591\") " pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:04.355299 kubelet[3406]: I0117 00:31:04.355062 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/441e897e-7cad-49ae-85a1-babdbbc91ee3-config-volume\") pod \"coredns-668d6bf9bc-gkzjm\" (UID: \"441e897e-7cad-49ae-85a1-babdbbc91ee3\") " pod="kube-system/coredns-668d6bf9bc-gkzjm" Jan 17 00:31:04.355299 kubelet[3406]: I0117 00:31:04.355079 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9cw4\" (UniqueName: \"kubernetes.io/projected/086626e6-23d7-433b-8fe2-380f0110d591-kube-api-access-q9cw4\") pod \"goldmane-666569f655-jt8r9\" (UID: \"086626e6-23d7-433b-8fe2-380f0110d591\") " pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:04.355556 kubelet[3406]: I0117 00:31:04.355104 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5246904-0f9d-4a5a-ba58-a0d97b0128df-calico-apiserver-certs\") pod \"calico-apiserver-7bd4f66f9c-4tl94\" (UID: \"a5246904-0f9d-4a5a-ba58-a0d97b0128df\") " pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" Jan 17 00:31:04.355556 kubelet[3406]: I0117 00:31:04.355132 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-backend-key-pair\") pod \"whisker-6b9fcdf797-tdsrd\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " pod="calico-system/whisker-6b9fcdf797-tdsrd" Jan 17 00:31:04.355556 kubelet[3406]: I0117 00:31:04.355152 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4q62\" (UniqueName: 
\"kubernetes.io/projected/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-kube-api-access-b4q62\") pod \"whisker-6b9fcdf797-tdsrd\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " pod="calico-system/whisker-6b9fcdf797-tdsrd" Jan 17 00:31:04.355556 kubelet[3406]: I0117 00:31:04.355175 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp24j\" (UniqueName: \"kubernetes.io/projected/a5246904-0f9d-4a5a-ba58-a0d97b0128df-kube-api-access-sp24j\") pod \"calico-apiserver-7bd4f66f9c-4tl94\" (UID: \"a5246904-0f9d-4a5a-ba58-a0d97b0128df\") " pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" Jan 17 00:31:04.355556 kubelet[3406]: I0117 00:31:04.355202 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gqwh\" (UniqueName: \"kubernetes.io/projected/3b534c16-0d44-4e13-804d-f2f891a56a96-kube-api-access-4gqwh\") pod \"coredns-668d6bf9bc-dq7hz\" (UID: \"3b534c16-0d44-4e13-804d-f2f891a56a96\") " pod="kube-system/coredns-668d6bf9bc-dq7hz" Jan 17 00:31:04.355701 kubelet[3406]: I0117 00:31:04.355224 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/086626e6-23d7-433b-8fe2-380f0110d591-config\") pod \"goldmane-666569f655-jt8r9\" (UID: \"086626e6-23d7-433b-8fe2-380f0110d591\") " pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:04.355701 kubelet[3406]: I0117 00:31:04.355249 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4cec6c0e-e80c-4688-94c8-dc0543670d3f-calico-apiserver-certs\") pod \"calico-apiserver-7bd4f66f9c-79jbf\" (UID: \"4cec6c0e-e80c-4688-94c8-dc0543670d3f\") " pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" Jan 17 00:31:04.355701 kubelet[3406]: I0117 00:31:04.355272 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-ca-bundle\") pod \"whisker-6b9fcdf797-tdsrd\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " pod="calico-system/whisker-6b9fcdf797-tdsrd" Jan 17 00:31:04.355701 kubelet[3406]: I0117 00:31:04.355311 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r87v4\" (UniqueName: \"kubernetes.io/projected/441e897e-7cad-49ae-85a1-babdbbc91ee3-kube-api-access-r87v4\") pod \"coredns-668d6bf9bc-gkzjm\" (UID: \"441e897e-7cad-49ae-85a1-babdbbc91ee3\") " pod="kube-system/coredns-668d6bf9bc-gkzjm" Jan 17 00:31:04.355701 kubelet[3406]: I0117 00:31:04.355354 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f248d2c0-f221-4bde-8ea2-75ac2344f18d-tigera-ca-bundle\") pod \"calico-kube-controllers-7fddb47c6b-xwhmv\" (UID: \"f248d2c0-f221-4bde-8ea2-75ac2344f18d\") " pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" Jan 17 00:31:04.355864 kubelet[3406]: I0117 00:31:04.355397 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prbwh\" (UniqueName: \"kubernetes.io/projected/4cec6c0e-e80c-4688-94c8-dc0543670d3f-kube-api-access-prbwh\") pod \"calico-apiserver-7bd4f66f9c-79jbf\" (UID: \"4cec6c0e-e80c-4688-94c8-dc0543670d3f\") " 
pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" Jan 17 00:31:04.536031 containerd[1828]: time="2026-01-17T00:31:04.535971119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gkzjm,Uid:441e897e-7cad-49ae-85a1-babdbbc91ee3,Namespace:kube-system,Attempt:0,}" Jan 17 00:31:04.547977 containerd[1828]: time="2026-01-17T00:31:04.547920099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fddb47c6b-xwhmv,Uid:f248d2c0-f221-4bde-8ea2-75ac2344f18d,Namespace:calico-system,Attempt:0,}" Jan 17 00:31:04.551259 containerd[1828]: time="2026-01-17T00:31:04.551214076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jt8r9,Uid:086626e6-23d7-433b-8fe2-380f0110d591,Namespace:calico-system,Attempt:0,}" Jan 17 00:31:04.555152 containerd[1828]: time="2026-01-17T00:31:04.555101167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-4tl94,Uid:a5246904-0f9d-4a5a-ba58-a0d97b0128df,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:31:04.558210 containerd[1828]: time="2026-01-17T00:31:04.558170839Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-79jbf,Uid:4cec6c0e-e80c-4688-94c8-dc0543670d3f,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:31:04.817520 containerd[1828]: time="2026-01-17T00:31:04.817356813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7hz,Uid:3b534c16-0d44-4e13-804d-f2f891a56a96,Namespace:kube-system,Attempt:0,}" Jan 17 00:31:04.823123 containerd[1828]: time="2026-01-17T00:31:04.823073447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9fcdf797-tdsrd,Uid:a2b3a57c-b25a-48a7-ad7c-540b6859bcb1,Namespace:calico-system,Attempt:0,}" Jan 17 00:31:04.834060 containerd[1828]: time="2026-01-17T00:31:04.833649595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnm26,Uid:a7052c5c-a862-4e62-a623-7782ea46a871,Namespace:calico-system,Attempt:0,}" Jan 17 00:31:10.421402 containerd[1828]: time="2026-01-17T00:31:07.027344795Z" level=error msg="collecting metrics for 80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4" error="cgroups: cgroup deleted: unknown" Jan 17 00:31:10.422486 kubelet[3406]: I0117 00:31:07.733415 3406 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 00:31:10.768189 containerd[1828]: time="2026-01-17T00:31:10.767720495Z" level=info msg="shim disconnected" id=80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4 namespace=k8s.io Jan 17 00:31:10.768189 containerd[1828]: time="2026-01-17T00:31:10.767823398Z" level=warning msg="cleaning up after shim disconnected" id=80e6acd0c0e6092ca66d744b0efed96117cc488ff4255b61246739369744e2c4 namespace=k8s.io Jan 17 00:31:10.768189 containerd[1828]: time="2026-01-17T00:31:10.767841098Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:31:11.003551 containerd[1828]: time="2026-01-17T00:31:11.003299915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:31:13.484839 containerd[1828]: time="2026-01-17T00:31:13.484718814Z" level=error msg="Failed to destroy network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.485718 containerd[1828]: time="2026-01-17T00:31:13.485488533Z" 
level=error msg="encountered an error cleaning up failed sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.485718 containerd[1828]: time="2026-01-17T00:31:13.485579435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-79jbf,Uid:4cec6c0e-e80c-4688-94c8-dc0543670d3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.486050 kubelet[3406]: E0117 00:31:13.486000 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.486574 kubelet[3406]: E0117 00:31:13.486179 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" Jan 17 00:31:13.486574 kubelet[3406]: E0117 00:31:13.486222 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" Jan 17 00:31:13.486574 kubelet[3406]: E0117 00:31:13.486299 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:13.577590 containerd[1828]: time="2026-01-17T00:31:13.577522501Z" level=error msg="Failed to destroy network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.578014 containerd[1828]: time="2026-01-17T00:31:13.577919511Z" level=error msg="encountered an error cleaning up failed sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.578111 containerd[1828]: time="2026-01-17T00:31:13.578051814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7hz,Uid:3b534c16-0d44-4e13-804d-f2f891a56a96,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.578409 kubelet[3406]: E0117 00:31:13.578359 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.578523 kubelet[3406]: E0117 00:31:13.578446 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7hz" Jan 17 00:31:13.578523 kubelet[3406]: E0117 00:31:13.578480 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7hz" Jan 17 00:31:13.578620 kubelet[3406]: E0117 00:31:13.578548 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dq7hz_kube-system(3b534c16-0d44-4e13-804d-f2f891a56a96)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dq7hz_kube-system(3b534c16-0d44-4e13-804d-f2f891a56a96)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dq7hz" podUID="3b534c16-0d44-4e13-804d-f2f891a56a96" Jan 17 00:31:13.734218 containerd[1828]: time="2026-01-17T00:31:13.734142644Z" level=error msg="Failed to destroy network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.734580 containerd[1828]: time="2026-01-17T00:31:13.734541054Z" level=error msg="encountered an error cleaning up failed sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.734669 containerd[1828]: time="2026-01-17T00:31:13.734613256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fddb47c6b-xwhmv,Uid:f248d2c0-f221-4bde-8ea2-75ac2344f18d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.735069 kubelet[3406]: E0117 00:31:13.734941 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.735069 kubelet[3406]: E0117 00:31:13.735038 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" Jan 17 00:31:13.735228 kubelet[3406]: E0117 00:31:13.735070 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" Jan 17 00:31:13.735228 kubelet[3406]: E0117 00:31:13.735138 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:13.841722 containerd[1828]: time="2026-01-17T00:31:13.840932064Z" level=error msg="Failed to destroy network for sandbox 
\"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.841722 containerd[1828]: time="2026-01-17T00:31:13.841457177Z" level=error msg="encountered an error cleaning up failed sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.841722 containerd[1828]: time="2026-01-17T00:31:13.841539879Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gkzjm,Uid:441e897e-7cad-49ae-85a1-babdbbc91ee3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.842032 kubelet[3406]: E0117 00:31:13.841959 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.842108 kubelet[3406]: E0117 00:31:13.842047 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gkzjm" Jan 17 00:31:13.842108 kubelet[3406]: E0117 00:31:13.842080 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gkzjm" Jan 17 00:31:13.843703 kubelet[3406]: E0117 00:31:13.842213 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gkzjm_kube-system(441e897e-7cad-49ae-85a1-babdbbc91ee3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gkzjm_kube-system(441e897e-7cad-49ae-85a1-babdbbc91ee3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gkzjm" podUID="441e897e-7cad-49ae-85a1-babdbbc91ee3" Jan 17 00:31:13.903970 containerd[1828]: time="2026-01-17T00:31:13.903900009Z" 
level=error msg="Failed to destroy network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.904395 containerd[1828]: time="2026-01-17T00:31:13.904354620Z" level=error msg="encountered an error cleaning up failed sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.904523 containerd[1828]: time="2026-01-17T00:31:13.904430822Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jt8r9,Uid:086626e6-23d7-433b-8fe2-380f0110d591,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.904823 kubelet[3406]: E0117 00:31:13.904772 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.904975 kubelet[3406]: E0117 00:31:13.904860 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:13.904975 kubelet[3406]: E0117 00:31:13.904892 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jt8r9" Jan 17 00:31:13.905111 kubelet[3406]: E0117 00:31:13.904964 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 
00:31:13.940277 containerd[1828]: time="2026-01-17T00:31:13.940213600Z" level=error msg="Failed to destroy network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.940686 containerd[1828]: time="2026-01-17T00:31:13.940625510Z" level=error msg="encountered an error cleaning up failed sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.940865 containerd[1828]: time="2026-01-17T00:31:13.940714412Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-4tl94,Uid:a5246904-0f9d-4a5a-ba58-a0d97b0128df,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.941120 kubelet[3406]: E0117 00:31:13.941064 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:13.941213 kubelet[3406]: E0117 00:31:13.941158 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" Jan 17 00:31:13.941213 kubelet[3406]: E0117 00:31:13.941192 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" Jan 17 00:31:13.941306 kubelet[3406]: E0117 00:31:13.941255 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:13.983710 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe-shm.mount: Deactivated successfully. Jan 17 00:31:13.983964 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2-shm.mount: Deactivated successfully. Jan 17 00:31:13.984835 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5-shm.mount: Deactivated successfully. Jan 17 00:31:14.010775 kubelet[3406]: I0117 00:31:14.009480 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:14.016771 containerd[1828]: time="2026-01-17T00:31:14.016707077Z" level=info msg="StopPodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" Jan 17 00:31:14.018440 containerd[1828]: time="2026-01-17T00:31:14.018067410Z" level=info msg="Ensure that sandbox 94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2 in task-service has been cleanup successfully" Jan 17 00:31:14.020542 kubelet[3406]: I0117 00:31:14.019002 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:14.022570 containerd[1828]: time="2026-01-17T00:31:14.022523119Z" level=info msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" Jan 17 00:31:14.023217 containerd[1828]: time="2026-01-17T00:31:14.023192136Z" level=info msg="Ensure that sandbox 5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d in task-service has been cleanup successfully" Jan 17 00:31:14.025847 kubelet[3406]: I0117 00:31:14.025821 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:14.028725 containerd[1828]: time="2026-01-17T00:31:14.028678470Z" level=info msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" Jan 17 00:31:14.029415 containerd[1828]: time="2026-01-17T00:31:14.029379888Z" level=info msg="Ensure that sandbox 36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f in task-service has been cleanup successfully" Jan 17 00:31:14.046176 kubelet[3406]: I0117 00:31:14.046118 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:14.051450 containerd[1828]: time="2026-01-17T00:31:14.051380627Z" level=info msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" Jan 17 00:31:14.055950 containerd[1828]: time="2026-01-17T00:31:14.055605131Z" level=info msg="Ensure that sandbox 29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5 in task-service has been cleanup successfully" Jan 17 00:31:14.063582 kubelet[3406]: I0117 00:31:14.062863 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:14.072696 containerd[1828]: time="2026-01-17T00:31:14.072063535Z" level=info msg="StopPodSandbox for 
\"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" Jan 17 00:31:14.072696 containerd[1828]: time="2026-01-17T00:31:14.072325841Z" level=info msg="Ensure that sandbox 78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206 in task-service has been cleanup successfully" Jan 17 00:31:14.105381 kubelet[3406]: I0117 00:31:14.104480 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:14.110506 containerd[1828]: time="2026-01-17T00:31:14.110440476Z" level=info msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" Jan 17 00:31:14.114106 containerd[1828]: time="2026-01-17T00:31:14.114038165Z" level=info msg="Ensure that sandbox 558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe in task-service has been cleanup successfully" Jan 17 00:31:14.234375 containerd[1828]: time="2026-01-17T00:31:14.233172188Z" level=error msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" failed" error="failed to destroy network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.235358 kubelet[3406]: E0117 00:31:14.235105 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:14.235358 kubelet[3406]: E0117 00:31:14.235192 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5"} Jan 17 00:31:14.235358 kubelet[3406]: E0117 00:31:14.235277 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4cec6c0e-e80c-4688-94c8-dc0543670d3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.235358 kubelet[3406]: E0117 00:31:14.235312 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4cec6c0e-e80c-4688-94c8-dc0543670d3f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:14.236229 containerd[1828]: time="2026-01-17T00:31:14.236174661Z" level=error msg="StopPodSandbox for 
\"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" failed" error="failed to destroy network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.236643 kubelet[3406]: E0117 00:31:14.236594 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:14.236826 kubelet[3406]: E0117 00:31:14.236660 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2"} Jan 17 00:31:14.236826 kubelet[3406]: E0117 00:31:14.236706 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"441e897e-7cad-49ae-85a1-babdbbc91ee3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.236826 kubelet[3406]: E0117 00:31:14.236752 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"441e897e-7cad-49ae-85a1-babdbbc91ee3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gkzjm" podUID="441e897e-7cad-49ae-85a1-babdbbc91ee3" Jan 17 00:31:14.243874 containerd[1828]: time="2026-01-17T00:31:14.243693646Z" level=error msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" failed" error="failed to destroy network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.244804 kubelet[3406]: E0117 00:31:14.244042 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:14.244804 kubelet[3406]: E0117 00:31:14.244121 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe"} Jan 17 00:31:14.244804 kubelet[3406]: E0117 00:31:14.244182 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f248d2c0-f221-4bde-8ea2-75ac2344f18d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.244804 kubelet[3406]: E0117 00:31:14.244222 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f248d2c0-f221-4bde-8ea2-75ac2344f18d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:14.246470 containerd[1828]: time="2026-01-17T00:31:14.246413713Z" level=error msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" failed" error="failed to destroy network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.247447 kubelet[3406]: E0117 00:31:14.247364 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:14.247563 kubelet[3406]: E0117 00:31:14.247468 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d"} Jan 17 00:31:14.247563 kubelet[3406]: E0117 00:31:14.247531 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a5246904-0f9d-4a5a-ba58-a0d97b0128df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.247698 kubelet[3406]: E0117 00:31:14.247578 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a5246904-0f9d-4a5a-ba58-a0d97b0128df\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:14.271047 containerd[1828]: time="2026-01-17T00:31:14.270725209Z" level=error msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" failed" error="failed to destroy network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.271573 kubelet[3406]: E0117 00:31:14.271127 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:14.271573 kubelet[3406]: E0117 00:31:14.271207 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f"} Jan 17 00:31:14.271573 kubelet[3406]: E0117 00:31:14.271285 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b534c16-0d44-4e13-804d-f2f891a56a96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.271573 kubelet[3406]: E0117 00:31:14.271324 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3b534c16-0d44-4e13-804d-f2f891a56a96\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dq7hz" podUID="3b534c16-0d44-4e13-804d-f2f891a56a96" Jan 17 00:31:14.285762 containerd[1828]: time="2026-01-17T00:31:14.285533272Z" level=error msg="Failed to destroy network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.288100 containerd[1828]: time="2026-01-17T00:31:14.287101911Z" level=error msg="encountered an error cleaning up failed sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.289430 
containerd[1828]: time="2026-01-17T00:31:14.289340166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6b9fcdf797-tdsrd,Uid:a2b3a57c-b25a-48a7-ad7c-540b6859bcb1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.292673 kubelet[3406]: E0117 00:31:14.291066 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.292673 kubelet[3406]: E0117 00:31:14.291158 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b9fcdf797-tdsrd" Jan 17 00:31:14.292673 kubelet[3406]: E0117 00:31:14.291202 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6b9fcdf797-tdsrd" Jan 17 00:31:14.293073 kubelet[3406]: E0117 00:31:14.291275 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6b9fcdf797-tdsrd_calico-system(a2b3a57c-b25a-48a7-ad7c-540b6859bcb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6b9fcdf797-tdsrd_calico-system(a2b3a57c-b25a-48a7-ad7c-540b6859bcb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b9fcdf797-tdsrd" podUID="a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" Jan 17 00:31:14.294523 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b-shm.mount: Deactivated successfully. 
Jan 17 00:31:14.304163 containerd[1828]: time="2026-01-17T00:31:14.304091128Z" level=error msg="StopPodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" failed" error="failed to destroy network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.304902 kubelet[3406]: E0117 00:31:14.304652 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:14.304902 kubelet[3406]: E0117 00:31:14.304732 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206"} Jan 17 00:31:14.304902 kubelet[3406]: E0117 00:31:14.304805 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"086626e6-23d7-433b-8fe2-380f0110d591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:14.304902 kubelet[3406]: E0117 00:31:14.304840 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"086626e6-23d7-433b-8fe2-380f0110d591\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:14.308940 containerd[1828]: time="2026-01-17T00:31:14.308891245Z" level=error msg="Failed to destroy network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.311238 containerd[1828]: time="2026-01-17T00:31:14.311055799Z" level=error msg="encountered an error cleaning up failed sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.311238 containerd[1828]: time="2026-01-17T00:31:14.311155801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnm26,Uid:a7052c5c-a862-4e62-a623-7782ea46a871,Namespace:calico-system,Attempt:0,} failed, error" error="failed to 
setup network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.312974 kubelet[3406]: E0117 00:31:14.311491 3406 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:14.312974 kubelet[3406]: E0117 00:31:14.311580 3406 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnm26" Jan 17 00:31:14.312974 kubelet[3406]: E0117 00:31:14.311614 3406 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bnm26" Jan 17 00:31:14.313448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8-shm.mount: Deactivated successfully. 
Jan 17 00:31:14.316821 kubelet[3406]: E0117 00:31:14.316770 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:15.109264 kubelet[3406]: I0117 00:31:15.109224 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:15.112082 kubelet[3406]: I0117 00:31:15.111346 3406 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:15.114778 containerd[1828]: time="2026-01-17T00:31:15.114379108Z" level=info msg="StopPodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" Jan 17 00:31:15.114778 containerd[1828]: time="2026-01-17T00:31:15.114700816Z" level=info msg="Ensure that sandbox f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b in task-service has been cleanup successfully" Jan 17 00:31:15.127781 containerd[1828]: time="2026-01-17T00:31:15.127423128Z" level=info msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" Jan 17 00:31:15.128576 containerd[1828]: time="2026-01-17T00:31:15.128518955Z" level=info msg="Ensure that sandbox e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8 in task-service has been cleanup successfully" Jan 17 00:31:15.198191 containerd[1828]: time="2026-01-17T00:31:15.198115662Z" level=error msg="StopPodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" failed" error="failed to destroy network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:15.198680 kubelet[3406]: E0117 00:31:15.198571 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:15.198680 kubelet[3406]: E0117 00:31:15.198660 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b"} Jan 17 00:31:15.198969 kubelet[3406]: E0117 00:31:15.198726 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = 
failed to destroy network for sandbox \\\"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:15.200033 kubelet[3406]: E0117 00:31:15.198780 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6b9fcdf797-tdsrd" podUID="a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" Jan 17 00:31:15.200124 containerd[1828]: time="2026-01-17T00:31:15.200082011Z" level=error msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" failed" error="failed to destroy network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:31:15.200401 kubelet[3406]: E0117 00:31:15.200347 3406 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:15.200493 kubelet[3406]: E0117 00:31:15.200419 3406 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8"} Jan 17 00:31:15.200493 kubelet[3406]: E0117 00:31:15.200465 3406 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a7052c5c-a862-4e62-a623-7782ea46a871\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:31:15.200619 kubelet[3406]: E0117 00:31:15.200501 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a7052c5c-a862-4e62-a623-7782ea46a871\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:23.542683 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480918747.mount: Deactivated successfully. 
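Every sandbox failure in the sequence above bottoms out in the same error: stat /var/lib/calico/nodename: no such file or directory. That file is written by the calico/node container when it starts, and the Calico CNI plugin reads it to learn the node's name before servicing any add or delete; until calico-node is running (its image only finishes pulling below, at 00:31:23), every CNI operation on this host fails identically and kubelet keeps retrying. A minimal Go sketch of that gating check, inferred from the error text in the log rather than taken from the plugin's source:

package main

import (
	"fmt"
	"os"
	"strings"
)

// Path written by calico/node at startup; file name taken from the log above.
const nodenameFile = "/var/lib/calico/nodename"

// nodename reproduces the check implied by the repeated error: if the file
// is missing, fail with the same hint the plugin logs.
func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // mirrors the CNI plugin returning an error to containerd
	}
	fmt.Println("node name:", name)
}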
Jan 17 00:31:23.574092 containerd[1828]: time="2026-01-17T00:31:23.574012808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:23.576654 containerd[1828]: time="2026-01-17T00:31:23.576574679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:31:23.580775 containerd[1828]: time="2026-01-17T00:31:23.580665092Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:23.586687 containerd[1828]: time="2026-01-17T00:31:23.586242146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:31:23.587997 containerd[1828]: time="2026-01-17T00:31:23.587860991Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 12.584423772s" Jan 17 00:31:23.588315 containerd[1828]: time="2026-01-17T00:31:23.588163999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:31:23.601513 containerd[1828]: time="2026-01-17T00:31:23.600766647Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:31:23.654211 containerd[1828]: time="2026-01-17T00:31:23.654146222Z" level=info msg="CreateContainer within sandbox \"f008dcfe99e47278b30828e99849c16f20b2e89dd4edd3a1e440d315c178daef\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"902fe3951c8dbd708e4474801ef8b1c8208b0954be8abeb0582ba92341509cca\"" Jan 17 00:31:23.655251 containerd[1828]: time="2026-01-17T00:31:23.655190251Z" level=info msg="StartContainer for \"902fe3951c8dbd708e4474801ef8b1c8208b0954be8abeb0582ba92341509cca\"" Jan 17 00:31:23.747142 containerd[1828]: time="2026-01-17T00:31:23.746930885Z" level=info msg="StartContainer for \"902fe3951c8dbd708e4474801ef8b1c8208b0954be8abeb0582ba92341509cca\" returns successfully" Jan 17 00:31:24.085224 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:31:24.085431 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 17 00:31:24.407385 kubelet[3406]: I0117 00:31:24.405490 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wj5t5" podStartSLOduration=1.6338504600000001 podStartE2EDuration="32.40545548s" podCreationTimestamp="2026-01-17 00:30:52 +0000 UTC" firstStartedPulling="2026-01-17 00:30:52.817732611 +0000 UTC m=+26.107086630" lastFinishedPulling="2026-01-17 00:31:23.589337531 +0000 UTC m=+56.878691650" observedRunningTime="2026-01-17 00:31:24.205451454 +0000 UTC m=+57.494805573" watchObservedRunningTime="2026-01-17 00:31:24.40545548 +0000 UTC m=+57.694809499" Jan 17 00:31:24.410980 containerd[1828]: time="2026-01-17T00:31:24.410933731Z" level=info msg="StopPodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.554 [INFO][4613] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.555 [INFO][4613] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" iface="eth0" netns="/var/run/netns/cni-f81d5506-8f25-07e8-fc4b-fee11fafc6e3" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.555 [INFO][4613] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" iface="eth0" netns="/var/run/netns/cni-f81d5506-8f25-07e8-fc4b-fee11fafc6e3" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.556 [INFO][4613] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" iface="eth0" netns="/var/run/netns/cni-f81d5506-8f25-07e8-fc4b-fee11fafc6e3" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.556 [INFO][4613] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.556 [INFO][4613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.627 [INFO][4620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.629 [INFO][4620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.629 [INFO][4620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.644 [WARNING][4620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.644 [INFO][4620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.646 [INFO][4620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:24.655368 containerd[1828]: 2026-01-17 00:31:24.652 [INFO][4613] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:24.660005 containerd[1828]: time="2026-01-17T00:31:24.656812624Z" level=info msg="TearDown network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" successfully" Jan 17 00:31:24.660005 containerd[1828]: time="2026-01-17T00:31:24.656855726Z" level=info msg="StopPodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" returns successfully" Jan 17 00:31:24.669794 systemd[1]: run-netns-cni\x2df81d5506\x2d8f25\x2d07e8\x2dfc4b\x2dfee11fafc6e3.mount: Deactivated successfully. Jan 17 00:31:24.804604 kubelet[3406]: I0117 00:31:24.804542 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-backend-key-pair\") pod \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " Jan 17 00:31:24.804818 kubelet[3406]: I0117 00:31:24.804634 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4q62\" (UniqueName: \"kubernetes.io/projected/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-kube-api-access-b4q62\") pod \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " Jan 17 00:31:24.804818 kubelet[3406]: I0117 00:31:24.804675 3406 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-ca-bundle\") pod \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\" (UID: \"a2b3a57c-b25a-48a7-ad7c-540b6859bcb1\") " Jan 17 00:31:24.805504 kubelet[3406]: I0117 00:31:24.805456 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" (UID: "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:31:24.809625 kubelet[3406]: I0117 00:31:24.809574 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" (UID: "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:31:24.811913 kubelet[3406]: I0117 00:31:24.811541 3406 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-kube-api-access-b4q62" (OuterVolumeSpecName: "kube-api-access-b4q62") pod "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" (UID: "a2b3a57c-b25a-48a7-ad7c-540b6859bcb1"). InnerVolumeSpecName "kube-api-access-b4q62". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:31:24.815527 systemd[1]: var-lib-kubelet-pods-a2b3a57c\x2db25a\x2d48a7\x2dad7c\x2d540b6859bcb1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db4q62.mount: Deactivated successfully. Jan 17 00:31:24.816010 systemd[1]: var-lib-kubelet-pods-a2b3a57c\x2db25a\x2d48a7\x2dad7c\x2d540b6859bcb1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:31:24.906415 kubelet[3406]: I0117 00:31:24.906291 3406 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b4q62\" (UniqueName: \"kubernetes.io/projected/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-kube-api-access-b4q62\") on node \"ci-4081.3.6-n-2e1a0c4804\" DevicePath \"\"" Jan 17 00:31:24.906415 kubelet[3406]: I0117 00:31:24.906343 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-ca-bundle\") on node \"ci-4081.3.6-n-2e1a0c4804\" DevicePath \"\"" Jan 17 00:31:24.906415 kubelet[3406]: I0117 00:31:24.906356 3406 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-2e1a0c4804\" DevicePath \"\"" Jan 17 00:31:25.195713 systemd[1]: run-containerd-runc-k8s.io-902fe3951c8dbd708e4474801ef8b1c8208b0954be8abeb0582ba92341509cca-runc.dLaxIL.mount: Deactivated successfully. 
Jan 17 00:31:25.413617 kubelet[3406]: I0117 00:31:25.413536 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b534c0b-2a92-45dc-b919-720218923434-whisker-backend-key-pair\") pod \"whisker-8dc795d65-glbln\" (UID: \"0b534c0b-2a92-45dc-b919-720218923434\") " pod="calico-system/whisker-8dc795d65-glbln" Jan 17 00:31:25.413617 kubelet[3406]: I0117 00:31:25.413620 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b534c0b-2a92-45dc-b919-720218923434-whisker-ca-bundle\") pod \"whisker-8dc795d65-glbln\" (UID: \"0b534c0b-2a92-45dc-b919-720218923434\") " pod="calico-system/whisker-8dc795d65-glbln" Jan 17 00:31:25.414360 kubelet[3406]: I0117 00:31:25.413660 3406 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgdct\" (UniqueName: \"kubernetes.io/projected/0b534c0b-2a92-45dc-b919-720218923434-kube-api-access-jgdct\") pod \"whisker-8dc795d65-glbln\" (UID: \"0b534c0b-2a92-45dc-b919-720218923434\") " pod="calico-system/whisker-8dc795d65-glbln" Jan 17 00:31:25.583582 containerd[1828]: time="2026-01-17T00:31:25.583420526Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dc795d65-glbln,Uid:0b534c0b-2a92-45dc-b919-720218923434,Namespace:calico-system,Attempt:0,}" Jan 17 00:31:25.751649 systemd-networkd[1399]: calid2c00df6451: Link UP Jan 17 00:31:25.752236 systemd-networkd[1399]: calid2c00df6451: Gained carrier Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.654 [INFO][4662] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.665 [INFO][4662] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0 whisker-8dc795d65- calico-system 0b534c0b-2a92-45dc-b919-720218923434 913 0 2026-01-17 00:31:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:8dc795d65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 whisker-8dc795d65-glbln eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid2c00df6451 [] [] }} ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.666 [INFO][4662] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.699 [INFO][4675] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" HandleID="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.699 [INFO][4675] ipam/ipam_plugin.go 275: Auto assigning 
IP ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" HandleID="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"whisker-8dc795d65-glbln", "timestamp":"2026-01-17 00:31:25.699083121 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.699 [INFO][4675] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.699 [INFO][4675] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.699 [INFO][4675] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.706 [INFO][4675] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.713 [INFO][4675] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.717 [INFO][4675] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.719 [INFO][4675] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.721 [INFO][4675] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.721 [INFO][4675] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.723 [INFO][4675] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.729 [INFO][4675] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.735 [INFO][4675] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.193/26] block=192.168.121.192/26 handle="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.735 [INFO][4675] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.193/26] handle="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.736 
[INFO][4675] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:25.776919 containerd[1828]: 2026-01-17 00:31:25.736 [INFO][4675] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.193/26] IPv6=[] ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" HandleID="k8s-pod-network.8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.738 [INFO][4662] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0", GenerateName:"whisker-8dc795d65-", Namespace:"calico-system", SelfLink:"", UID:"0b534c0b-2a92-45dc-b919-720218923434", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 31, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8dc795d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"whisker-8dc795d65-glbln", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2c00df6451", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.738 [INFO][4662] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.193/32] ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.738 [INFO][4662] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2c00df6451 ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.749 [INFO][4662] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.750 [INFO][4662] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0", GenerateName:"whisker-8dc795d65-", Namespace:"calico-system", SelfLink:"", UID:"0b534c0b-2a92-45dc-b919-720218923434", ResourceVersion:"913", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 31, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"8dc795d65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb", Pod:"whisker-8dc795d65-glbln", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid2c00df6451", MAC:"8a:f6:2c:91:d8:e0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:25.779228 containerd[1828]: 2026-01-17 00:31:25.772 [INFO][4662] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb" Namespace="calico-system" Pod="whisker-8dc795d65-glbln" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--8dc795d65--glbln-eth0" Jan 17 00:31:25.808854 containerd[1828]: time="2026-01-17T00:31:25.808286838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:25.809640 containerd[1828]: time="2026-01-17T00:31:25.808401442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:25.809640 containerd[1828]: time="2026-01-17T00:31:25.808425842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:25.809640 containerd[1828]: time="2026-01-17T00:31:25.808568246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:25.984514 containerd[1828]: time="2026-01-17T00:31:25.982789660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-8dc795d65-glbln,Uid:0b534c0b-2a92-45dc-b919-720218923434,Namespace:calico-system,Attempt:0,} returns sandbox id \"8cb0592b2c6856e19efc098317a64d1db8110311317ef65017fb69f6d97ea7fb\"" Jan 17 00:31:25.993858 containerd[1828]: time="2026-01-17T00:31:25.993807264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:31:26.237778 kernel: bpftool[4826]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:31:26.245665 containerd[1828]: time="2026-01-17T00:31:26.245600521Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:26.248663 containerd[1828]: time="2026-01-17T00:31:26.248587603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:31:26.248821 containerd[1828]: time="2026-01-17T00:31:26.248755908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:31:26.249454 kubelet[3406]: E0117 00:31:26.248996 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:31:26.249454 kubelet[3406]: E0117 00:31:26.249080 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:31:26.250893 kubelet[3406]: E0117 00:31:26.249289 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7d7352bb0c64a4eb1262e2afe0300e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:26.252204 containerd[1828]: time="2026-01-17T00:31:26.252169502Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:31:26.502267 containerd[1828]: time="2026-01-17T00:31:26.502095108Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:26.506768 containerd[1828]: time="2026-01-17T00:31:26.505890612Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:31:26.506768 containerd[1828]: time="2026-01-17T00:31:26.506029616Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:31:26.507004 kubelet[3406]: E0117 00:31:26.506273 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:31:26.507004 kubelet[3406]: E0117 00:31:26.506355 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:31:26.507528 kubelet[3406]: E0117 00:31:26.506535 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:26.508002 kubelet[3406]: E0117 00:31:26.507948 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:31:26.719844 systemd-networkd[1399]: vxlan.calico: Link UP Jan 17 00:31:26.719855 systemd-networkd[1399]: vxlan.calico: Gained carrier Jan 17 00:31:26.824472 containerd[1828]: time="2026-01-17T00:31:26.823878298Z" level=info msg="StopPodSandbox 
for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" Jan 17 00:31:26.840539 containerd[1828]: time="2026-01-17T00:31:26.837776582Z" level=info msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" Jan 17 00:31:26.852778 kubelet[3406]: I0117 00:31:26.850578 3406 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2b3a57c-b25a-48a7-ad7c-540b6859bcb1" path="/var/lib/kubelet/pods/a2b3a57c-b25a-48a7-ad7c-540b6859bcb1/volumes" Jan 17 00:31:26.902680 systemd-networkd[1399]: calid2c00df6451: Gained IPv6LL Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.920 [WARNING][4904] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.920 [INFO][4904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.920 [INFO][4904] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" iface="eth0" netns="" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.920 [INFO][4904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.920 [INFO][4904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.996 [INFO][4917] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.996 [INFO][4917] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:26.996 [INFO][4917] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:27.013 [WARNING][4917] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:27.013 [INFO][4917] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:27.016 [INFO][4917] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:27.023258 containerd[1828]: 2026-01-17 00:31:27.020 [INFO][4904] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.024374 containerd[1828]: time="2026-01-17T00:31:27.023368410Z" level=info msg="TearDown network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" successfully" Jan 17 00:31:27.024374 containerd[1828]: time="2026-01-17T00:31:27.023957226Z" level=info msg="StopPodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" returns successfully" Jan 17 00:31:27.025894 containerd[1828]: time="2026-01-17T00:31:27.025527369Z" level=info msg="RemovePodSandbox for \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" Jan 17 00:31:27.025894 containerd[1828]: time="2026-01-17T00:31:27.025577771Z" level=info msg="Forcibly stopping sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\"" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.974 [INFO][4903] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.974 [INFO][4903] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" iface="eth0" netns="/var/run/netns/cni-53fadf52-5198-9ba2-cfcb-ca5017cb1a04" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.974 [INFO][4903] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" iface="eth0" netns="/var/run/netns/cni-53fadf52-5198-9ba2-cfcb-ca5017cb1a04" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.977 [INFO][4903] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" iface="eth0" netns="/var/run/netns/cni-53fadf52-5198-9ba2-cfcb-ca5017cb1a04" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.977 [INFO][4903] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:26.977 [INFO][4903] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.045 [INFO][4924] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.046 [INFO][4924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.046 [INFO][4924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.055 [WARNING][4924] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.056 [INFO][4924] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.059 [INFO][4924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:27.071952 containerd[1828]: 2026-01-17 00:31:27.065 [INFO][4903] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:31:27.083208 containerd[1828]: time="2026-01-17T00:31:27.079055848Z" level=info msg="TearDown network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" successfully" Jan 17 00:31:27.083208 containerd[1828]: time="2026-01-17T00:31:27.079131850Z" level=info msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" returns successfully" Jan 17 00:31:27.084271 containerd[1828]: time="2026-01-17T00:31:27.083785879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnm26,Uid:a7052c5c-a862-4e62-a623-7782ea46a871,Namespace:calico-system,Attempt:1,}" Jan 17 00:31:27.089038 systemd[1]: run-netns-cni\x2d53fadf52\x2d5198\x2d9ba2\x2dcfcb\x2dca5017cb1a04.mount: Deactivated successfully. Jan 17 00:31:27.171882 kubelet[3406]: E0117 00:31:27.171583 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.114 [WARNING][4939] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.114 [INFO][4939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.114 [INFO][4939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" iface="eth0" netns="" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.114 [INFO][4939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.114 [INFO][4939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.223 [INFO][4948] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.226 [INFO][4948] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.226 [INFO][4948] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.236 [WARNING][4948] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.236 [INFO][4948] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" HandleID="k8s-pod-network.f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-whisker--6b9fcdf797--tdsrd-eth0" Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.239 [INFO][4948] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:27.252177 containerd[1828]: 2026-01-17 00:31:27.243 [INFO][4939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b" Jan 17 00:31:27.252177 containerd[1828]: time="2026-01-17T00:31:27.250262479Z" level=info msg="TearDown network for sandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" successfully" Jan 17 00:31:27.273936 containerd[1828]: time="2026-01-17T00:31:27.271233758Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:31:27.273936 containerd[1828]: time="2026-01-17T00:31:27.271355261Z" level=info msg="RemovePodSandbox \"f91f5c72fa71a79fd02a098e7387a2e3b8f88f9f5c037b685552530eb4eee28b\" returns successfully" Jan 17 00:31:27.380931 systemd-networkd[1399]: cali5b4e2d8d2f6: Link UP Jan 17 00:31:27.384495 systemd-networkd[1399]: cali5b4e2d8d2f6: Gained carrier Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.286 [INFO][4954] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0 csi-node-driver- calico-system a7052c5c-a862-4e62-a623-7782ea46a871 931 0 2026-01-17 00:30:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 csi-node-driver-bnm26 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5b4e2d8d2f6 [] [] }} ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.286 [INFO][4954] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.330 [INFO][4980] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" HandleID="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.330 [INFO][4980] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" HandleID="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5220), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"csi-node-driver-bnm26", "timestamp":"2026-01-17 00:31:27.329999382 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.330 [INFO][4980] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.330 [INFO][4980] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
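Both the teardown and the allocation entries bracket their datastore work between "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": with several pods being wired up at once in this boot (whisker, csi-node-driver, two coredns pods), concurrent CNI invocations on the node are serialized so that two of them cannot claim the same address out of the node's block. A rough, Linux-only sketch of a host-wide lock built on flock(2); the lock path and helper name are illustrative assumptions, not Calico's actual mechanism:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostLock serializes fn against every other process that takes the
// same lock file, the way concurrent CNI ADD/DEL calls on one node are
// serialized around IPAM work.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	// LOCK_EX blocks until every other holder has released the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	err := withHostLock("/tmp/ipam.lock", func() error {
		fmt.Println("acquired host-wide IPAM lock; assigning address...")
		return nil // block allocation would happen here
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, "ipam:", err)
	}
}

The allocation walk in the [4980] entries that follow runs entirely inside such a critical section.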
Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.330 [INFO][4980] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.340 [INFO][4980] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.345 [INFO][4980] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.352 [INFO][4980] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.354 [INFO][4980] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.356 [INFO][4980] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.356 [INFO][4980] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.358 [INFO][4980] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.366 [INFO][4980] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.374 [INFO][4980] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.194/26] block=192.168.121.192/26 handle="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.375 [INFO][4980] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.194/26] handle="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.375 [INFO][4980] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
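The [4980] entries above are Calico's block-affinity allocation in miniature: the node holds an affinity for the /26 block 192.168.121.192/26 (64 addresses), loads the block, claims the lowest free address, and writes the block back to the datastore, which is why this node's pods receive consecutive addresses (.193 for whisker, now .194 for csi-node-driver-bnm26). A toy allocator over the same /26, assuming for simplicity that the block is a plain bitmap of 64 slots:

package main

import (
	"fmt"
	"net/netip"
)

// block is a toy stand-in for an IPAM block: a /26 plus a 64-entry
// allocation bitmap that would be persisted in the datastore.
type block struct {
	cidr netip.Prefix
	used [64]bool
}

// assign claims the lowest free address in the block, like the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) assign() (netip.Addr, bool) {
	addr := b.cidr.Addr()
	for i := 0; i < 64; i++ {
		if !b.used[i] {
			b.used[i] = true
			return addr, true
		}
		addr = addr.Next()
	}
	return netip.Addr{}, false // block exhausted; IPAM would try another block
}

func main() {
	b := &block{cidr: netip.MustParsePrefix("192.168.121.192/26")}
	// Assumption: .192 is unavailable, since the first address handed out in
	// this log was .193 (whisker); .193 is then taken by that pod.
	b.used[0] = true
	b.used[1] = true
	ip, _ := b.assign()
	fmt.Println("assigned:", ip) // 192.168.121.194, matching csi-node-driver-bnm26
}

Running assign once more would yield 192.168.121.195, which is exactly what the coredns allocation further down receives.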
Jan 17 00:31:27.421956 containerd[1828]: 2026-01-17 00:31:27.375 [INFO][4980] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.194/26] IPv6=[] ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" HandleID="k8s-pod-network.5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.377 [INFO][4954] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7052c5c-a862-4e62-a623-7782ea46a871", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"csi-node-driver-bnm26", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b4e2d8d2f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.377 [INFO][4954] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.194/32] ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.377 [INFO][4954] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b4e2d8d2f6 ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.383 [INFO][4954] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.384 [INFO][4954] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7052c5c-a862-4e62-a623-7782ea46a871", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e", Pod:"csi-node-driver-bnm26", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b4e2d8d2f6", MAC:"06:81:ba:dd:ae:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:27.424621 containerd[1828]: 2026-01-17 00:31:27.412 [INFO][4954] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e" Namespace="calico-system" Pod="csi-node-driver-bnm26" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:31:27.481006 containerd[1828]: time="2026-01-17T00:31:27.477147547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:27.481006 containerd[1828]: time="2026-01-17T00:31:27.477309952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:27.481006 containerd[1828]: time="2026-01-17T00:31:27.477387354Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:27.481006 containerd[1828]: time="2026-01-17T00:31:27.479827221Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:27.581156 containerd[1828]: time="2026-01-17T00:31:27.580774410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bnm26,Uid:a7052c5c-a862-4e62-a623-7782ea46a871,Namespace:calico-system,Attempt:1,} returns sandbox id \"5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e\"" Jan 17 00:31:27.584674 containerd[1828]: time="2026-01-17T00:31:27.584630217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:31:27.826432 containerd[1828]: time="2026-01-17T00:31:27.826341695Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:27.830274 containerd[1828]: time="2026-01-17T00:31:27.830206802Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:31:27.830549 containerd[1828]: time="2026-01-17T00:31:27.830359206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:31:27.831435 kubelet[3406]: E0117 00:31:27.830665 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:31:27.831435 kubelet[3406]: E0117 00:31:27.830751 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:31:27.831435 kubelet[3406]: E0117 00:31:27.830991 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:27.835010 containerd[1828]: time="2026-01-17T00:31:27.834968833Z" level=info msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" Jan 17 00:31:27.837050 containerd[1828]: time="2026-01-17T00:31:27.837000489Z" level=info msg="StopPodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" Jan 17 00:31:27.843035 containerd[1828]: time="2026-01-17T00:31:27.842733948Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:31:27.925949 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.946 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.948 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" iface="eth0" netns="/var/run/netns/cni-51c8ee03-2d14-967a-b28b-7eb9bb894f8f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.950 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" iface="eth0" netns="/var/run/netns/cni-51c8ee03-2d14-967a-b28b-7eb9bb894f8f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.952 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" iface="eth0" netns="/var/run/netns/cni-51c8ee03-2d14-967a-b28b-7eb9bb894f8f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.952 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.952 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.996 [INFO][5099] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.996 [INFO][5099] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:27.996 [INFO][5099] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:28.004 [WARNING][5099] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:28.004 [INFO][5099] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:28.006 [INFO][5099] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:28.013677 containerd[1828]: 2026-01-17 00:31:28.008 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:31:28.013677 containerd[1828]: time="2026-01-17T00:31:28.012412336Z" level=info msg="TearDown network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" successfully" Jan 17 00:31:28.013677 containerd[1828]: time="2026-01-17T00:31:28.012453837Z" level=info msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" returns successfully" Jan 17 00:31:28.014521 containerd[1828]: time="2026-01-17T00:31:28.013785674Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7hz,Uid:3b534c16-0d44-4e13-804d-f2f891a56a96,Namespace:kube-system,Attempt:1,}" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.947 [INFO][5083] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.948 [INFO][5083] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" iface="eth0" netns="/var/run/netns/cni-0d2bcfd9-b82d-ac08-4dee-89537b8d51d7" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.949 [INFO][5083] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" iface="eth0" netns="/var/run/netns/cni-0d2bcfd9-b82d-ac08-4dee-89537b8d51d7" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.950 [INFO][5083] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" iface="eth0" netns="/var/run/netns/cni-0d2bcfd9-b82d-ac08-4dee-89537b8d51d7" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.950 [INFO][5083] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:27.950 [INFO][5083] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.000 [INFO][5097] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.000 [INFO][5097] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.006 [INFO][5097] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.018 [WARNING][5097] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.018 [INFO][5097] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.020 [INFO][5097] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:28.023159 containerd[1828]: 2026-01-17 00:31:28.021 [INFO][5083] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:31:28.023839 containerd[1828]: time="2026-01-17T00:31:28.023495042Z" level=info msg="TearDown network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" successfully" Jan 17 00:31:28.023839 containerd[1828]: time="2026-01-17T00:31:28.023534443Z" level=info msg="StopPodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" returns successfully" Jan 17 00:31:28.024615 containerd[1828]: time="2026-01-17T00:31:28.024580672Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gkzjm,Uid:441e897e-7cad-49ae-85a1-babdbbc91ee3,Namespace:kube-system,Attempt:1,}" Jan 17 00:31:28.085984 systemd[1]: run-netns-cni\x2d51c8ee03\x2d2d14\x2d967a\x2db28b\x2d7eb9bb894f8f.mount: Deactivated successfully. Jan 17 00:31:28.086327 systemd[1]: run-netns-cni\x2d0d2bcfd9\x2db82d\x2dac08\x2d4dee\x2d89537b8d51d7.mount: Deactivated successfully. 
Jan 17 00:31:28.110149 containerd[1828]: time="2026-01-17T00:31:28.110090235Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:28.113712 containerd[1828]: time="2026-01-17T00:31:28.113634433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:31:28.113913 containerd[1828]: time="2026-01-17T00:31:28.113827638Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:31:28.114138 kubelet[3406]: E0117 00:31:28.114074 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:31:28.114268 kubelet[3406]: E0117 00:31:28.114156 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:31:28.114408 kubelet[3406]: E0117 00:31:28.114347 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:28.116099 kubelet[3406]: E0117 00:31:28.116024 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:28.175517 kubelet[3406]: E0117 00:31:28.174963 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:28.291450 systemd-networkd[1399]: cali86f12d95b19: Link UP Jan 17 00:31:28.296150 systemd-networkd[1399]: cali86f12d95b19: Gained carrier Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.161 [INFO][5110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0 coredns-668d6bf9bc- kube-system 3b534c16-0d44-4e13-804d-f2f891a56a96 949 0 2026-01-17 00:30:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 coredns-668d6bf9bc-dq7hz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali86f12d95b19 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.161 [INFO][5110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.227 [INFO][5134] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" HandleID="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.228 [INFO][5134] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" HandleID="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d98e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"coredns-668d6bf9bc-dq7hz", "timestamp":"2026-01-17 00:31:28.227858188 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.228 [INFO][5134] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.228 [INFO][5134] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.228 [INFO][5134] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.238 [INFO][5134] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.245 [INFO][5134] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.250 [INFO][5134] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.252 [INFO][5134] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.254 [INFO][5134] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.254 [INFO][5134] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.262 [INFO][5134] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.272 [INFO][5134] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.280 [INFO][5134] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.195/26] block=192.168.121.192/26 handle="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.280 [INFO][5134] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.195/26] handle="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.280 [INFO][5134] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:31:28.317937 containerd[1828]: 2026-01-17 00:31:28.281 [INFO][5134] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.195/26] IPv6=[] ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" HandleID="k8s-pod-network.76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.286 [INFO][5110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b534c16-0d44-4e13-804d-f2f891a56a96", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"coredns-668d6bf9bc-dq7hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f12d95b19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.286 [INFO][5110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.195/32] ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.286 [INFO][5110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali86f12d95b19 ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.290 [INFO][5110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.290 [INFO][5110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b534c16-0d44-4e13-804d-f2f891a56a96", ResourceVersion:"949", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee", Pod:"coredns-668d6bf9bc-dq7hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f12d95b19", MAC:"12:93:e7:ae:67:e5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:28.319311 containerd[1828]: 2026-01-17 00:31:28.313 [INFO][5110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7hz" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:31:28.365113 containerd[1828]: time="2026-01-17T00:31:28.364860474Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:28.365113 containerd[1828]: time="2026-01-17T00:31:28.364939276Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:28.365113 containerd[1828]: time="2026-01-17T00:31:28.364953576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:28.367331 containerd[1828]: time="2026-01-17T00:31:28.366951331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:28.427808 systemd-networkd[1399]: calie771dcbaced: Link UP Jan 17 00:31:28.428028 systemd-networkd[1399]: calie771dcbaced: Gained carrier Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.170 [INFO][5119] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0 coredns-668d6bf9bc- kube-system 441e897e-7cad-49ae-85a1-babdbbc91ee3 948 0 2026-01-17 00:30:33 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 coredns-668d6bf9bc-gkzjm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie771dcbaced [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.171 [INFO][5119] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.240 [INFO][5139] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" HandleID="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.241 [INFO][5139] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" HandleID="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"coredns-668d6bf9bc-gkzjm", "timestamp":"2026-01-17 00:31:28.24094065 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.241 [INFO][5139] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.280 [INFO][5139] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.280 [INFO][5139] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.341 [INFO][5139] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.350 [INFO][5139] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.358 [INFO][5139] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.372 [INFO][5139] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.377 [INFO][5139] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.378 [INFO][5139] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.383 [INFO][5139] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.391 [INFO][5139] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.414 [INFO][5139] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.196/26] block=192.168.121.192/26 handle="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.416 [INFO][5139] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.196/26] handle="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.416 [INFO][5139] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:31:28.465893 containerd[1828]: 2026-01-17 00:31:28.416 [INFO][5139] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.196/26] IPv6=[] ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" HandleID="k8s-pod-network.dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.421 [INFO][5119] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441e897e-7cad-49ae-85a1-babdbbc91ee3", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"coredns-668d6bf9bc-gkzjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie771dcbaced", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.421 [INFO][5119] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.196/32] ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.422 [INFO][5119] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie771dcbaced ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.426 [INFO][5119] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.433 [INFO][5119] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441e897e-7cad-49ae-85a1-babdbbc91ee3", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df", Pod:"coredns-668d6bf9bc-gkzjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie771dcbaced", MAC:"6e:a2:79:ab:a5:15", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:28.468400 containerd[1828]: 2026-01-17 00:31:28.460 [INFO][5119] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df" Namespace="kube-system" Pod="coredns-668d6bf9bc-gkzjm" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:31:28.505390 systemd-networkd[1399]: cali5b4e2d8d2f6: Gained IPv6LL Jan 17 00:31:28.532615 containerd[1828]: time="2026-01-17T00:31:28.532477105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7hz,Uid:3b534c16-0d44-4e13-804d-f2f891a56a96,Namespace:kube-system,Attempt:1,} returns sandbox id \"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee\"" Jan 17 00:31:28.539505 containerd[1828]: time="2026-01-17T00:31:28.539444997Z" level=info msg="CreateContainer within sandbox \"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:31:28.547844 containerd[1828]: time="2026-01-17T00:31:28.546626396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:28.547844 containerd[1828]: time="2026-01-17T00:31:28.547555921Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:28.547844 containerd[1828]: time="2026-01-17T00:31:28.547575022Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:28.547844 containerd[1828]: time="2026-01-17T00:31:28.547700425Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:28.588631 containerd[1828]: time="2026-01-17T00:31:28.588570954Z" level=info msg="CreateContainer within sandbox \"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a235963d4da6c7dc6a8cc8d6aa0b83326aa9bb194e6e3d07b5887850269f476f\"" Jan 17 00:31:28.589832 containerd[1828]: time="2026-01-17T00:31:28.589785088Z" level=info msg="StartContainer for \"a235963d4da6c7dc6a8cc8d6aa0b83326aa9bb194e6e3d07b5887850269f476f\"" Jan 17 00:31:28.638548 containerd[1828]: time="2026-01-17T00:31:28.637590609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gkzjm,Uid:441e897e-7cad-49ae-85a1-babdbbc91ee3,Namespace:kube-system,Attempt:1,} returns sandbox id \"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df\"" Jan 17 00:31:28.647124 containerd[1828]: time="2026-01-17T00:31:28.647067571Z" level=info msg="CreateContainer within sandbox \"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:31:28.688617 containerd[1828]: time="2026-01-17T00:31:28.688549317Z" level=info msg="StartContainer for \"a235963d4da6c7dc6a8cc8d6aa0b83326aa9bb194e6e3d07b5887850269f476f\" returns successfully" Jan 17 00:31:28.688617 containerd[1828]: time="2026-01-17T00:31:28.688561717Z" level=info msg="CreateContainer within sandbox \"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3f5cf8015af0677260a000e1397731601a4cbba7bc372e10a51112935489a4bb\"" Jan 17 00:31:28.693438 containerd[1828]: time="2026-01-17T00:31:28.692140616Z" level=info msg="StartContainer for \"3f5cf8015af0677260a000e1397731601a4cbba7bc372e10a51112935489a4bb\"" Jan 17 00:31:28.825938 containerd[1828]: time="2026-01-17T00:31:28.825656405Z" level=info msg="StartContainer for \"3f5cf8015af0677260a000e1397731601a4cbba7bc372e10a51112935489a4bb\" returns successfully" Jan 17 00:31:28.838093 containerd[1828]: time="2026-01-17T00:31:28.838032747Z" level=info msg="StopPodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" Jan 17 00:31:28.842858 containerd[1828]: time="2026-01-17T00:31:28.840453214Z" level=info msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.985 [INFO][5335] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.986 [INFO][5335] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" iface="eth0" netns="/var/run/netns/cni-bc9c052d-4faf-f8a4-990a-29ba8bd2ce09" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.991 [INFO][5335] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" iface="eth0" netns="/var/run/netns/cni-bc9c052d-4faf-f8a4-990a-29ba8bd2ce09" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.991 [INFO][5335] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" iface="eth0" netns="/var/run/netns/cni-bc9c052d-4faf-f8a4-990a-29ba8bd2ce09" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.991 [INFO][5335] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:28.992 [INFO][5335] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.039 [INFO][5354] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.040 [INFO][5354] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.040 [INFO][5354] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.049 [WARNING][5354] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.049 [INFO][5354] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.052 [INFO][5354] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:29.057891 containerd[1828]: 2026-01-17 00:31:29.055 [INFO][5335] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:31:29.058623 containerd[1828]: time="2026-01-17T00:31:29.058115127Z" level=info msg="TearDown network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" successfully" Jan 17 00:31:29.058623 containerd[1828]: time="2026-01-17T00:31:29.058162429Z" level=info msg="StopPodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" returns successfully" Jan 17 00:31:29.059309 containerd[1828]: time="2026-01-17T00:31:29.059269259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jt8r9,Uid:086626e6-23d7-433b-8fe2-380f0110d591,Namespace:calico-system,Attempt:1,}" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.020 [INFO][5342] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.021 [INFO][5342] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" iface="eth0" netns="/var/run/netns/cni-ffc607b5-5ba0-396d-61e1-3ad48f00b064" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.021 [INFO][5342] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" iface="eth0" netns="/var/run/netns/cni-ffc607b5-5ba0-396d-61e1-3ad48f00b064" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.021 [INFO][5342] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" iface="eth0" netns="/var/run/netns/cni-ffc607b5-5ba0-396d-61e1-3ad48f00b064" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.021 [INFO][5342] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.021 [INFO][5342] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.062 [INFO][5361] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.062 [INFO][5361] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.062 [INFO][5361] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.070 [WARNING][5361] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.070 [INFO][5361] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.072 [INFO][5361] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:29.077931 containerd[1828]: 2026-01-17 00:31:29.074 [INFO][5342] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:31:29.077931 containerd[1828]: time="2026-01-17T00:31:29.077408761Z" level=info msg="TearDown network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" successfully" Jan 17 00:31:29.077931 containerd[1828]: time="2026-01-17T00:31:29.077453562Z" level=info msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" returns successfully" Jan 17 00:31:29.079910 containerd[1828]: time="2026-01-17T00:31:29.078669895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-79jbf,Uid:4cec6c0e-e80c-4688-94c8-dc0543670d3f,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:31:29.086228 systemd[1]: run-netns-cni\x2dbc9c052d\x2d4faf\x2df8a4\x2d990a\x2d29ba8bd2ce09.mount: Deactivated successfully. Jan 17 00:31:29.086737 systemd[1]: run-netns-cni\x2dffc607b5\x2d5ba0\x2d396d\x2d61e1\x2d3ad48f00b064.mount: Deactivated successfully. 
Jan 17 00:31:29.228770 kubelet[3406]: I0117 00:31:29.225119 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gkzjm" podStartSLOduration=56.225087623 podStartE2EDuration="56.225087623s" podCreationTimestamp="2026-01-17 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:31:29.222970371 +0000 UTC m=+62.512324390" watchObservedRunningTime="2026-01-17 00:31:29.225087623 +0000 UTC m=+62.514441642" Jan 17 00:31:29.237166 kubelet[3406]: E0117 00:31:29.227010 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:29.271231 kubelet[3406]: I0117 00:31:29.269492 3406 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dq7hz" podStartSLOduration=56.269459727 podStartE2EDuration="56.269459727s" podCreationTimestamp="2026-01-17 00:30:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:31:29.264190996 +0000 UTC m=+62.553545115" watchObservedRunningTime="2026-01-17 00:31:29.269459727 +0000 UTC m=+62.558813846" Jan 17 00:31:29.410408 systemd-networkd[1399]: calief297db3db4: Link UP Jan 17 00:31:29.413373 systemd-networkd[1399]: calief297db3db4: Gained carrier Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.177 [INFO][5368] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0 goldmane-666569f655- calico-system 086626e6-23d7-433b-8fe2-380f0110d591 978 0 2026-01-17 00:30:50 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 goldmane-666569f655-jt8r9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calief297db3db4 [] [] }} ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.178 [INFO][5368] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" 
Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.335 [INFO][5390] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" HandleID="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.335 [INFO][5390] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" HandleID="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103170), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"goldmane-666569f655-jt8r9", "timestamp":"2026-01-17 00:31:29.335314464 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.335 [INFO][5390] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.335 [INFO][5390] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.335 [INFO][5390] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.348 [INFO][5390] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.355 [INFO][5390] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.362 [INFO][5390] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.366 [INFO][5390] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.370 [INFO][5390] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.370 [INFO][5390] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.372 [INFO][5390] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415 Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.383 [INFO][5390] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" host="ci-4081.3.6-n-2e1a0c4804" Jan 
17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.393 [INFO][5390] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.197/26] block=192.168.121.192/26 handle="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.394 [INFO][5390] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.197/26] handle="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.394 [INFO][5390] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:29.441382 containerd[1828]: 2026-01-17 00:31:29.395 [INFO][5390] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.197/26] IPv6=[] ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" HandleID="k8s-pod-network.2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.398 [INFO][5368] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"086626e6-23d7-433b-8fe2-380f0110d591", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"goldmane-666569f655-jt8r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calief297db3db4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.403 [INFO][5368] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.197/32] ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.403 [INFO][5368] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief297db3db4 ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" 
WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.412 [INFO][5368] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.416 [INFO][5368] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"086626e6-23d7-433b-8fe2-380f0110d591", ResourceVersion:"978", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415", Pod:"goldmane-666569f655-jt8r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calief297db3db4", MAC:"1e:94:9b:e8:e9:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:29.442372 containerd[1828]: 2026-01-17 00:31:29.438 [INFO][5368] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415" Namespace="calico-system" Pod="goldmane-666569f655-jt8r9" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:31:29.496938 containerd[1828]: time="2026-01-17T00:31:29.495856655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:29.496938 containerd[1828]: time="2026-01-17T00:31:29.495954158Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:29.496938 containerd[1828]: time="2026-01-17T00:31:29.495969158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:29.496938 containerd[1828]: time="2026-01-17T00:31:29.496265365Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:29.527626 systemd-networkd[1399]: cali154bb1c0825: Link UP Jan 17 00:31:29.530991 systemd-networkd[1399]: cali154bb1c0825: Gained carrier Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.285 [INFO][5377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0 calico-apiserver-7bd4f66f9c- calico-apiserver 4cec6c0e-e80c-4688-94c8-dc0543670d3f 979 0 2026-01-17 00:30:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd4f66f9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 calico-apiserver-7bd4f66f9c-79jbf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali154bb1c0825 [] [] }} ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.285 [INFO][5377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.383 [INFO][5401] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" HandleID="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.384 [INFO][5401] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" HandleID="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f250), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"calico-apiserver-7bd4f66f9c-79jbf", "timestamp":"2026-01-17 00:31:29.383731768 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.384 [INFO][5401] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.395 [INFO][5401] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.395 [INFO][5401] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.453 [INFO][5401] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.462 [INFO][5401] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.472 [INFO][5401] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.476 [INFO][5401] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.482 [INFO][5401] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.483 [INFO][5401] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.486 [INFO][5401] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.494 [INFO][5401] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.510 [INFO][5401] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.198/26] block=192.168.121.192/26 handle="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.510 [INFO][5401] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.198/26] handle="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.511 [INFO][5401] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:31:29.557117 containerd[1828]: 2026-01-17 00:31:29.511 [INFO][5401] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.198/26] IPv6=[] ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" HandleID="k8s-pod-network.419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.515 [INFO][5377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4cec6c0e-e80c-4688-94c8-dc0543670d3f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"calico-apiserver-7bd4f66f9c-79jbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154bb1c0825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.515 [INFO][5377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.198/32] ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.515 [INFO][5377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali154bb1c0825 ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.531 [INFO][5377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.532 
[INFO][5377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4cec6c0e-e80c-4688-94c8-dc0543670d3f", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc", Pod:"calico-apiserver-7bd4f66f9c-79jbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154bb1c0825", MAC:"26:67:36:41:df:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:29.560319 containerd[1828]: 2026-01-17 00:31:29.552 [INFO][5377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-79jbf" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:31:29.612407 containerd[1828]: time="2026-01-17T00:31:29.612089976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:29.614159 containerd[1828]: time="2026-01-17T00:31:29.613767719Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:29.614159 containerd[1828]: time="2026-01-17T00:31:29.613860721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:29.615122 containerd[1828]: time="2026-01-17T00:31:29.614690742Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:29.615709 containerd[1828]: time="2026-01-17T00:31:29.615578065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jt8r9,Uid:086626e6-23d7-433b-8fe2-380f0110d591,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415\"" Jan 17 00:31:29.619906 containerd[1828]: time="2026-01-17T00:31:29.619850274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:31:29.699554 containerd[1828]: time="2026-01-17T00:31:29.699498214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-79jbf,Uid:4cec6c0e-e80c-4688-94c8-dc0543670d3f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc\"" Jan 17 00:31:29.781976 systemd-networkd[1399]: calie771dcbaced: Gained IPv6LL Jan 17 00:31:29.782422 systemd-networkd[1399]: cali86f12d95b19: Gained IPv6LL Jan 17 00:31:29.831173 containerd[1828]: time="2026-01-17T00:31:29.831061384Z" level=info msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" Jan 17 00:31:29.833815 containerd[1828]: time="2026-01-17T00:31:29.833538647Z" level=info msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" Jan 17 00:31:29.889002 containerd[1828]: time="2026-01-17T00:31:29.888945066Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:29.893440 containerd[1828]: time="2026-01-17T00:31:29.893370680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:31:29.893973 containerd[1828]: time="2026-01-17T00:31:29.893406181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:29.894048 kubelet[3406]: E0117 00:31:29.893725 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:31:29.894048 kubelet[3406]: E0117 00:31:29.893817 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:31:29.897149 kubelet[3406]: E0117 00:31:29.894231 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:29.897149 kubelet[3406]: E0117 00:31:29.895581 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:29.899557 containerd[1828]: 
time="2026-01-17T00:31:29.899513037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.925 [INFO][5531] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.925 [INFO][5531] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" iface="eth0" netns="/var/run/netns/cni-1554d29b-48a4-f69a-ea5d-7433b602f6e4" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.926 [INFO][5531] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" iface="eth0" netns="/var/run/netns/cni-1554d29b-48a4-f69a-ea5d-7433b602f6e4" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.926 [INFO][5531] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" iface="eth0" netns="/var/run/netns/cni-1554d29b-48a4-f69a-ea5d-7433b602f6e4" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.926 [INFO][5531] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.926 [INFO][5531] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.987 [INFO][5545] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.987 [INFO][5545] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.988 [INFO][5545] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.997 [WARNING][5545] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.997 [INFO][5545] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:29.999 [INFO][5545] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:30.005082 containerd[1828]: 2026-01-17 00:31:30.001 [INFO][5531] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:31:30.007361 containerd[1828]: time="2026-01-17T00:31:30.007175994Z" level=info msg="TearDown network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" successfully" Jan 17 00:31:30.007361 containerd[1828]: time="2026-01-17T00:31:30.007265897Z" level=info msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" returns successfully" Jan 17 00:31:30.010328 containerd[1828]: time="2026-01-17T00:31:30.010110270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-4tl94,Uid:a5246904-0f9d-4a5a-ba58-a0d97b0128df,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.924 [INFO][5532] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.926 [INFO][5532] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" iface="eth0" netns="/var/run/netns/cni-0ab60831-77b2-587a-8c46-c6b9f3990c08" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.927 [INFO][5532] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" iface="eth0" netns="/var/run/netns/cni-0ab60831-77b2-587a-8c46-c6b9f3990c08" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.943 [INFO][5532] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" iface="eth0" netns="/var/run/netns/cni-0ab60831-77b2-587a-8c46-c6b9f3990c08" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.943 [INFO][5532] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:29.943 [INFO][5532] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.002 [INFO][5550] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.002 [INFO][5550] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.002 [INFO][5550] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.011 [WARNING][5550] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.011 [INFO][5550] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.013 [INFO][5550] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:31:30.016718 containerd[1828]: 2026-01-17 00:31:30.015 [INFO][5532] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:31:30.017894 containerd[1828]: time="2026-01-17T00:31:30.016937945Z" level=info msg="TearDown network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" successfully" Jan 17 00:31:30.017894 containerd[1828]: time="2026-01-17T00:31:30.016973845Z" level=info msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" returns successfully" Jan 17 00:31:30.019074 containerd[1828]: time="2026-01-17T00:31:30.018584287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fddb47c6b-xwhmv,Uid:f248d2c0-f221-4bde-8ea2-75ac2344f18d,Namespace:calico-system,Attempt:1,}" Jan 17 00:31:30.105119 systemd[1]: run-netns-cni\x2d1554d29b\x2d48a4\x2df69a\x2dea5d\x2d7433b602f6e4.mount: Deactivated successfully. Jan 17 00:31:30.105364 systemd[1]: run-netns-cni\x2d0ab60831\x2d77b2\x2d587a\x2d8c46\x2dc6b9f3990c08.mount: Deactivated successfully. 
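Both StopPodSandbox flows above ([5531] and [5532]) show why CNI DEL has to be idempotent: the workload's veth is already gone ("Nothing to do"), the IPAM handle no longer exists, and the plugin logs a WARNING and returns success instead of failing the teardown, after which systemd cleans up the netns bind mounts. A short Go sketch of that release-if-present pattern, with hypothetical names:

package main

import "log"

// releaseByHandle frees whatever an IPAM handle still owns. Returning nil when
// nothing is recorded is deliberate: kubelet retries CNI DEL until it succeeds,
// so a repeated DEL for the same sandbox must warn and succeed, not error out.
// Hypothetical stand-in for the ipam_plugin.go release path logged above.
func releaseByHandle(alloc map[string]string, handleID string) error {
	ip, ok := alloc[handleID]
	if !ok {
		log.Printf("WARNING: asked to release %s but it doesn't exist. Ignoring", handleID)
		return nil
	}
	delete(alloc, handleID)
	log.Printf("released %s (was %s)", handleID, ip)
	return nil
}

func main() {
	alloc := map[string]string{} // already empty: the address was freed earlier
	_ = releaseByHandle(alloc, "k8s-pod-network.example-handle")
}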
Jan 17 00:31:30.189769 containerd[1828]: time="2026-01-17T00:31:30.187989925Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:30.191377 containerd[1828]: time="2026-01-17T00:31:30.191309811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:30.191532 containerd[1828]: time="2026-01-17T00:31:30.191334411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:30.191838 kubelet[3406]: E0117 00:31:30.191736 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:30.191947 kubelet[3406]: E0117 00:31:30.191865 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:30.192788 kubelet[3406]: E0117 00:31:30.192160 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prbwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:30.194281 kubelet[3406]: E0117 00:31:30.193713 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:30.217064 kubelet[3406]: E0117 00:31:30.217012 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:30.222796 kubelet[3406]: E0117 00:31:30.222016 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:30.300097 systemd-networkd[1399]: calia34d620a8d4: Link UP Jan 17 00:31:30.304564 systemd-networkd[1399]: calia34d620a8d4: Gained carrier Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.132 [INFO][5559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0 calico-apiserver-7bd4f66f9c- calico-apiserver a5246904-0f9d-4a5a-ba58-a0d97b0128df 1008 0 2026-01-17 
00:30:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bd4f66f9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 calico-apiserver-7bd4f66f9c-4tl94 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia34d620a8d4 [] [] }} ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.133 [INFO][5559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.183 [INFO][5584] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" HandleID="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.184 [INFO][5584] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" HandleID="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"calico-apiserver-7bd4f66f9c-4tl94", "timestamp":"2026-01-17 00:31:30.183598513 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.185 [INFO][5584] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.185 [INFO][5584] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
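The [5559] endpoint descriptor above already carries InterfaceName "calia34d620a8d4": Calico names the host side of each veth pair deterministically from the workload, so retries and teardown can re-find the device without storing extra state, and the result must fit the kernel's 15-byte IFNAMSIZ limit. A sketch of such a scheme follows, assuming a SHA-1 hash over the workload identifier; the exact input and hash Calico uses are an assumption here, so treat this as illustrative only.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName returns "cali" plus 11 hex chars of a hash of the workload ID:
// 15 bytes total, the IFNAMSIZ ceiling. Deterministic naming means the same
// pod always maps to the same cali* device on this host.
func vethName(workloadID string) string {
	h := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(h[:])[:11]
}

func main() {
	// Hypothetical workload ID; real inputs differ, so the output will not
	// literally be calia34d620a8d4.
	fmt.Println(vethName("calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94"))
}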
Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.185 [INFO][5584] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.195 [INFO][5584] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.204 [INFO][5584] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.213 [INFO][5584] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.218 [INFO][5584] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.226 [INFO][5584] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.226 [INFO][5584] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.238 [INFO][5584] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.252 [INFO][5584] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.265 [INFO][5584] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.199/26] block=192.168.121.192/26 handle="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.266 [INFO][5584] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.199/26] handle="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.266 [INFO][5584] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:31:30.338561 containerd[1828]: 2026-01-17 00:31:30.266 [INFO][5584] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.199/26] IPv6=[] ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" HandleID="k8s-pod-network.b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.280 [INFO][5559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5246904-0f9d-4a5a-ba58-a0d97b0128df", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"calico-apiserver-7bd4f66f9c-4tl94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia34d620a8d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.281 [INFO][5559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.199/32] ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.281 [INFO][5559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia34d620a8d4 ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.306 [INFO][5559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.308 
[INFO][5559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5246904-0f9d-4a5a-ba58-a0d97b0128df", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa", Pod:"calico-apiserver-7bd4f66f9c-4tl94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia34d620a8d4", MAC:"6e:03:f1:5b:15:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:30.340121 containerd[1828]: 2026-01-17 00:31:30.333 [INFO][5559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa" Namespace="calico-apiserver" Pod="calico-apiserver-7bd4f66f9c-4tl94" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:31:30.396280 systemd-networkd[1399]: cali22e03b02eab: Link UP Jan 17 00:31:30.398061 containerd[1828]: time="2026-01-17T00:31:30.396203458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:30.398061 containerd[1828]: time="2026-01-17T00:31:30.396316761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:30.398061 containerd[1828]: time="2026-01-17T00:31:30.396337062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:30.398170 systemd-networkd[1399]: cali22e03b02eab: Gained carrier Jan 17 00:31:30.400010 containerd[1828]: time="2026-01-17T00:31:30.399812951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.151 [INFO][5569] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0 calico-kube-controllers-7fddb47c6b- calico-system f248d2c0-f221-4bde-8ea2-75ac2344f18d 1007 0 2026-01-17 00:30:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7fddb47c6b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-2e1a0c4804 calico-kube-controllers-7fddb47c6b-xwhmv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali22e03b02eab [] [] }} ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.151 [INFO][5569] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.208 [INFO][5590] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" HandleID="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.209 [INFO][5590] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" HandleID="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d50a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-2e1a0c4804", "pod":"calico-kube-controllers-7fddb47c6b-xwhmv", "timestamp":"2026-01-17 00:31:30.20886226 +0000 UTC"}, Hostname:"ci-4081.3.6-n-2e1a0c4804", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.209 [INFO][5590] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.276 [INFO][5590] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
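Note the gap in the [5590] entries just above: the request logs "About to acquire host-wide IPAM lock" at 00:31:30.209 but "Acquired" only at 00:31:30.276, because [5584] (the 4tl94 sandbox) held the lock until releasing it at 00:31:30.266. Serializing every IPAM mutation on the node through one lock is what prevents two concurrent CNI ADDs from claiming the same ordinal. A minimal sketch of the pattern, with a plain mutex standing in for whatever locking primitive the real plugin uses:

package main

import (
	"fmt"
	"sync"
)

var (
	ipamLock sync.Mutex // stand-in for the host-wide IPAM lock
	next     = 198      // next free ordinal in 192.168.121.192/26
)

// assign claims one address under the lock; a concurrent CNI ADD blocks here,
// which is exactly the wait visible between "About to acquire" and "Acquired".
func assign(pod string) string {
	ipamLock.Lock()
	defer ipamLock.Unlock()
	ip := fmt.Sprintf("192.168.121.%d", next)
	next++
	return ip
}

func main() {
	var wg sync.WaitGroup
	pods := []string{"calico-apiserver-7bd4f66f9c-4tl94", "calico-kube-controllers-7fddb47c6b-xwhmv"}
	for _, pod := range pods {
		wg.Add(1)
		go func(p string) {
			defer wg.Done()
			fmt.Println(p, "->", assign(p))
		}(pod)
	}
	wg.Wait() // each pod gets a distinct address, never a duplicate
}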
Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.276 [INFO][5590] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-2e1a0c4804' Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.297 [INFO][5590] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.312 [INFO][5590] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.319 [INFO][5590] ipam/ipam.go 511: Trying affinity for 192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.324 [INFO][5590] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.330 [INFO][5590] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.192/26 host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.330 [INFO][5590] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.192/26 handle="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.336 [INFO][5590] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.350 [INFO][5590] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.192/26 handle="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.375 [INFO][5590] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.200/26] block=192.168.121.192/26 handle="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.379 [INFO][5590] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.200/26] handle="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" host="ci-4081.3.6-n-2e1a0c4804" Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.379 [INFO][5590] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:31:30.463393 containerd[1828]: 2026-01-17 00:31:30.379 [INFO][5590] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.200/26] IPv6=[] ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" HandleID="k8s-pod-network.28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.384 [INFO][5569] cni-plugin/k8s.go 418: Populated endpoint ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0", GenerateName:"calico-kube-controllers-7fddb47c6b-", Namespace:"calico-system", SelfLink:"", UID:"f248d2c0-f221-4bde-8ea2-75ac2344f18d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fddb47c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"", Pod:"calico-kube-controllers-7fddb47c6b-xwhmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22e03b02eab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.384 [INFO][5569] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.200/32] ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.384 [INFO][5569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22e03b02eab ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.401 [INFO][5569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" 
WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.404 [INFO][5569] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0", GenerateName:"calico-kube-controllers-7fddb47c6b-", Namespace:"calico-system", SelfLink:"", UID:"f248d2c0-f221-4bde-8ea2-75ac2344f18d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fddb47c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f", Pod:"calico-kube-controllers-7fddb47c6b-xwhmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22e03b02eab", MAC:"1a:8a:83:c4:6c:96", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:31:30.464384 containerd[1828]: 2026-01-17 00:31:30.451 [INFO][5569] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f" Namespace="calico-system" Pod="calico-kube-controllers-7fddb47c6b-xwhmv" WorkloadEndpoint="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:31:30.521260 containerd[1828]: time="2026-01-17T00:31:30.517871074Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:31:30.521260 containerd[1828]: time="2026-01-17T00:31:30.517954877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:31:30.521260 containerd[1828]: time="2026-01-17T00:31:30.518005778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:30.521260 containerd[1828]: time="2026-01-17T00:31:30.518206483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:31:30.557989 containerd[1828]: time="2026-01-17T00:31:30.557119680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bd4f66f9c-4tl94,Uid:a5246904-0f9d-4a5a-ba58-a0d97b0128df,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa\"" Jan 17 00:31:30.568861 containerd[1828]: time="2026-01-17T00:31:30.566433518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:30.647927 containerd[1828]: time="2026-01-17T00:31:30.647718200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7fddb47c6b-xwhmv,Uid:f248d2c0-f221-4bde-8ea2-75ac2344f18d,Namespace:calico-system,Attempt:1,} returns sandbox id \"28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f\"" Jan 17 00:31:30.742069 systemd-networkd[1399]: cali154bb1c0825: Gained IPv6LL Jan 17 00:31:30.828163 containerd[1828]: time="2026-01-17T00:31:30.827941416Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:30.831956 containerd[1828]: time="2026-01-17T00:31:30.831855416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:30.832341 containerd[1828]: time="2026-01-17T00:31:30.831853516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:30.833151 kubelet[3406]: E0117 00:31:30.832513 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:30.833151 kubelet[3406]: E0117 00:31:30.832584 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:30.833151 kubelet[3406]: E0117 00:31:30.832963 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:30.835390 containerd[1828]: time="2026-01-17T00:31:30.833572160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:31:30.836429 kubelet[3406]: E0117 00:31:30.835707 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:31.062506 systemd-networkd[1399]: calief297db3db4: Gained IPv6LL Jan 17 00:31:31.081179 systemd[1]: run-containerd-runc-k8s.io-28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f-runc.RTcAuT.mount: Deactivated successfully. 
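Every failed pull in this log follows the same escalation: the registry answers the tag lookup with http.StatusNotFound, containerd surfaces a NotFound RPC error, kubelet records ErrImagePull, and later sync attempts (like the pod_workers entries below) are throttled with ImagePullBackOff. Kubelet's image back-off is exponential with a cap; the exact parameters here (10s initial, doubling, 5m ceiling) are an assumption about the defaults, so take the schedule as illustrative. A sketch of that doubling-with-cap schedule:

package main

import (
	"fmt"
	"time"
)

// backoffSchedule returns the delays a repeatedly failing pull would wait.
// Assumed parameters: 10s initial, factor 2, 5m cap (kubelet defaults may differ).
func backoffSchedule(initial, max time.Duration, attempts int) []time.Duration {
	out := make([]time.Duration, 0, attempts)
	d := initial
	for i := 0; i < attempts; i++ {
		out = append(out, d)
		d *= 2
		if d > max {
			d = max
		}
	}
	return out
}

func main() {
	for i, d := range backoffSchedule(10*time.Second, 5*time.Minute, 7) {
		fmt.Printf("attempt %d: wait %v\n", i+1, d) // 10s 20s 40s 1m20s 2m40s 5m 5m
	}
}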
Jan 17 00:31:31.084267 containerd[1828]: time="2026-01-17T00:31:31.084211279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:31.087845 containerd[1828]: time="2026-01-17T00:31:31.087700669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:31:31.088142 containerd[1828]: time="2026-01-17T00:31:31.087697769Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:31:31.088228 kubelet[3406]: E0117 00:31:31.088151 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:31:31.088295 kubelet[3406]: E0117 00:31:31.088231 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:31:31.088776 kubelet[3406]: E0117 00:31:31.088518 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrrhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:31.089749 kubelet[3406]: E0117 00:31:31.089690 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:31.226726 kubelet[3406]: E0117 00:31:31.226295 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:31.232610 kubelet[3406]: E0117 00:31:31.232221 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:31.235004 kubelet[3406]: E0117 00:31:31.234223 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:31.235495 kubelet[3406]: E0117 00:31:31.235445 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:31.382331 systemd-networkd[1399]: calia34d620a8d4: Gained IPv6LL Jan 17 00:31:32.234103 kubelet[3406]: E0117 00:31:32.233858 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:32.234825 kubelet[3406]: E0117 00:31:32.234678 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:32.278983 systemd-networkd[1399]: cali22e03b02eab: Gained IPv6LL Jan 17 00:31:37.833631 containerd[1828]: time="2026-01-17T00:31:37.832720222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:31:38.082669 containerd[1828]: time="2026-01-17T00:31:38.082594364Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:38.086699 containerd[1828]: time="2026-01-17T00:31:38.086365356Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:31:38.086699 containerd[1828]: time="2026-01-17T00:31:38.086369156Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:31:38.086928 kubelet[3406]: E0117 00:31:38.086815 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:31:38.086928 kubelet[3406]: E0117 00:31:38.086891 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:31:38.088118 kubelet[3406]: E0117 00:31:38.087076 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7d7352bb0c64a4eb1262e2afe0300e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:38.090594 containerd[1828]: time="2026-01-17T00:31:38.090388174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:31:38.526690 containerd[1828]: time="2026-01-17T00:31:38.526614664Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:38.529826 containerd[1828]: time="2026-01-17T00:31:38.529682783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:31:38.529826 containerd[1828]: time="2026-01-17T00:31:38.529754390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:31:38.530092 kubelet[3406]: E0117 00:31:38.530033 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:31:38.530170 kubelet[3406]: E0117 00:31:38.530117 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:31:38.530351 kubelet[3406]: E0117 00:31:38.530301 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:38.532054 kubelet[3406]: E0117 00:31:38.532002 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:31:41.832345 containerd[1828]: time="2026-01-17T00:31:41.832285394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:31:43.862406 containerd[1828]: time="2026-01-17T00:31:43.862319704Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:43.865950 containerd[1828]: time="2026-01-17T00:31:43.865792292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:31:43.865950 containerd[1828]: time="2026-01-17T00:31:43.865854093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:31:43.866261 kubelet[3406]: E0117 00:31:43.866176 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:31:43.866805 kubelet[3406]: E0117 00:31:43.866275 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:31:43.866805 kubelet[3406]: E0117 00:31:43.866768 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:43.867687 containerd[1828]: time="2026-01-17T00:31:43.867425433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:44.119560 containerd[1828]: time="2026-01-17T00:31:44.119365589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:44.123080 containerd[1828]: time="2026-01-17T00:31:44.122687872Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:44.123080 containerd[1828]: time="2026-01-17T00:31:44.122716773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:44.123293 kubelet[3406]: E0117 00:31:44.123115 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:44.123293 kubelet[3406]: E0117 00:31:44.123194 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:44.124002 kubelet[3406]: E0117 00:31:44.123595 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prbwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:44.124612 containerd[1828]: time="2026-01-17T00:31:44.123677397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:31:44.125707 kubelet[3406]: E0117 00:31:44.125661 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:44.366544 containerd[1828]: time="2026-01-17T00:31:44.366464622Z" level=info 
msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:44.369682 containerd[1828]: time="2026-01-17T00:31:44.369506199Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:31:44.369682 containerd[1828]: time="2026-01-17T00:31:44.369548100Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:31:44.370531 kubelet[3406]: E0117 00:31:44.369892 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:31:44.370531 kubelet[3406]: E0117 00:31:44.369975 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:31:44.370531 kubelet[3406]: E0117 00:31:44.370163 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:44.371991 kubelet[3406]: E0117 00:31:44.371933 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:44.832265 containerd[1828]: time="2026-01-17T00:31:44.831786260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:31:45.071821 containerd[1828]: time="2026-01-17T00:31:45.071715213Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:45.074529 containerd[1828]: time="2026-01-17T00:31:45.074469482Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:31:45.074690 containerd[1828]: time="2026-01-17T00:31:45.074491183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:45.074929 kubelet[3406]: E0117 00:31:45.074878 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:31:45.075413 kubelet[3406]: E0117 00:31:45.074957 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:31:45.075474 kubelet[3406]: E0117 00:31:45.075383 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:45.076386 containerd[1828]: time="2026-01-17T00:31:45.076322329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:31:45.077615 kubelet[3406]: E0117 00:31:45.076887 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:45.357570 containerd[1828]: time="2026-01-17T00:31:45.357503922Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:45.361303 containerd[1828]: time="2026-01-17T00:31:45.361221816Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:31:45.361489 containerd[1828]: time="2026-01-17T00:31:45.361227316Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:31:45.361659 kubelet[3406]: E0117 00:31:45.361602 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:31:45.361788 kubelet[3406]: E0117 00:31:45.361679 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 
00:31:45.361983 kubelet[3406]: E0117 00:31:45.361929 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrrhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:45.363686 kubelet[3406]: E0117 00:31:45.363633 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" 
podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:31:47.834353 containerd[1828]: time="2026-01-17T00:31:47.833802815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:48.088305 containerd[1828]: time="2026-01-17T00:31:48.088078696Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:48.091716 containerd[1828]: time="2026-01-17T00:31:48.091562403Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:48.091716 containerd[1828]: time="2026-01-17T00:31:48.091645405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:48.092086 kubelet[3406]: E0117 00:31:48.091914 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:48.092086 kubelet[3406]: E0117 00:31:48.092002 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:48.092901 kubelet[3406]: E0117 00:31:48.092219 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:48.093480 kubelet[3406]: E0117 00:31:48.093427 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:51.834988 kubelet[3406]: E0117 00:31:51.834032 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:31:56.836967 kubelet[3406]: E0117 00:31:56.836809 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:31:57.831998 kubelet[3406]: E0117 00:31:57.831481 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:31:58.832930 kubelet[3406]: E0117 00:31:58.832848 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:31:58.837150 kubelet[3406]: E0117 00:31:58.836824 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:31:59.835774 kubelet[3406]: E0117 00:31:59.834264 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:32:05.359025 waagent[2068]: 2026-01-17T00:32:05.358511Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:32:05.368774 waagent[2068]: 2026-01-17T00:32:05.367560Z INFO ExtHandler Jan 17 00:32:05.368774 waagent[2068]: 2026-01-17T00:32:05.367806Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 60053e31-3881-4583-8399-4eeeb4dc1693 eTag: 5941405282189673666 source: Fabric] Jan 17 00:32:05.368774 waagent[2068]: 2026-01-17T00:32:05.368383Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
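The waagent records above show a routine goal-state refresh rather than an error: the agent notices a new incarnation, fetches the full goal state from the WireServer, downloads the artifacts profile and certificate, and finds no extension handlers to process. Below is a minimal sketch of such a fetch, assuming WALinuxAgent conventions; the fixed WireServer address 168.63.129.16 and the x-ms-version header are part of that protocol, while the exact query path is an assumption and the request only works from inside an Azure VM.

// Sketch: fetch the WireServer goal state the way the agent's
// "Fetching full goal state from the WireServer" record describes.
// Assumption-level detail; not the agent's actual implementation.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}

	// Well-known WireServer endpoint, reachable only from inside an Azure VM.
	req, err := http.NewRequest(http.MethodGet,
		"http://168.63.129.16/machine/?comp=goalstate", nil)
	if err != nil {
		panic(err)
	}
	// Protocol version header the WireServer requires.
	req.Header.Set("x-ms-version", "2012-11-30")

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("goal state fetch failed:", err)
		return
	}
	defer resp.Body.Close()

	// The response is XML carrying the incarnation number and container
	// configuration that the agent logs as "Fetch goal state completed".
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%s, %d bytes of goal-state XML\n", resp.Status, len(body))
}
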
Jan 17 00:32:05.369423 waagent[2068]: 2026-01-17T00:32:05.369354Z INFO ExtHandler Jan 17 00:32:05.369544 waagent[2068]: 2026-01-17T00:32:05.369475Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:32:05.467860 waagent[2068]: 2026-01-17T00:32:05.467403Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:32:05.660783 waagent[2068]: 2026-01-17T00:32:05.657287Z INFO ExtHandler Downloaded certificate {'thumbprint': '8B9F0E645812564ACBB1269663BBAA74A547CD61', 'hasPrivateKey': True} Jan 17 00:32:05.660783 waagent[2068]: 2026-01-17T00:32:05.659895Z INFO ExtHandler Fetch goal state completed Jan 17 00:32:05.660783 waagent[2068]: 2026-01-17T00:32:05.660626Z INFO ExtHandler ExtHandler Jan 17 00:32:05.660783 waagent[2068]: 2026-01-17T00:32:05.660766Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 04373d94-1823-4975-873b-f660b4bdd0e2 correlation a927296e-74da-4725-b624-4bb9c97db8d0 created: 2026-01-17T00:31:55.752754Z] Jan 17 00:32:05.661289 waagent[2068]: 2026-01-17T00:32:05.661231Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:32:05.664471 waagent[2068]: 2026-01-17T00:32:05.664365Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 3 ms] Jan 17 00:32:06.834868 containerd[1828]: time="2026-01-17T00:32:06.833818613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:32:07.093941 containerd[1828]: time="2026-01-17T00:32:07.093511565Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:07.099134 containerd[1828]: time="2026-01-17T00:32:07.098872041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:32:07.099134 containerd[1828]: time="2026-01-17T00:32:07.099056947Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:32:07.101814 kubelet[3406]: E0117 00:32:07.099943 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:32:07.101814 kubelet[3406]: E0117 00:32:07.100028 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:32:07.101814 kubelet[3406]: E0117 00:32:07.100198 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7d7352bb0c64a4eb1262e2afe0300e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:07.108206 containerd[1828]: time="2026-01-17T00:32:07.107475125Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:32:07.376166 containerd[1828]: time="2026-01-17T00:32:07.375970166Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:07.379312 containerd[1828]: time="2026-01-17T00:32:07.379060768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:32:07.379312 containerd[1828]: time="2026-01-17T00:32:07.379229573Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:32:07.380639 kubelet[3406]: E0117 00:32:07.379776 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:32:07.380639 kubelet[3406]: E0117 00:32:07.379855 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:32:07.380639 kubelet[3406]: E0117 00:32:07.380038 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:07.381799 kubelet[3406]: E0117 00:32:07.381722 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:32:08.840700 containerd[1828]: time="2026-01-17T00:32:08.840637896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:32:09.111131 containerd[1828]: time="2026-01-17T00:32:09.110683588Z" level=info msg="trying next 
host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:09.113774 containerd[1828]: time="2026-01-17T00:32:09.113551483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:32:09.113774 containerd[1828]: time="2026-01-17T00:32:09.113687887Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:32:09.116785 kubelet[3406]: E0117 00:32:09.115931 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:32:09.116785 kubelet[3406]: E0117 00:32:09.116019 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:32:09.116785 kubelet[3406]: E0117 00:32:09.116450 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:09.120470 containerd[1828]: time="2026-01-17T00:32:09.120120699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:32:09.121105 kubelet[3406]: E0117 00:32:09.120816 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:32:09.367399 containerd[1828]: time="2026-01-17T00:32:09.367210035Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:09.370487 containerd[1828]: time="2026-01-17T00:32:09.370412841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:32:09.370641 containerd[1828]: time="2026-01-17T00:32:09.370558846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:32:09.371783 kubelet[3406]: E0117 00:32:09.370836 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:32:09.371783 kubelet[3406]: E0117 00:32:09.370913 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:32:09.371783 kubelet[3406]: E0117 00:32:09.371097 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:09.374751 containerd[1828]: time="2026-01-17T00:32:09.374699782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:32:09.683043 containerd[1828]: time="2026-01-17T00:32:09.682924051Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:09.687445 containerd[1828]: time="2026-01-17T00:32:09.687222678Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:32:09.687445 containerd[1828]: time="2026-01-17T00:32:09.687318881Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:32:09.688066 kubelet[3406]: E0117 00:32:09.687959 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:32:09.688183 kubelet[3406]: E0117 00:32:09.688150 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:32:09.688421 kubelet[3406]: E0117 00:32:09.688371 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:09.690384 kubelet[3406]: E0117 00:32:09.690296 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:32:10.844581 containerd[1828]: time="2026-01-17T00:32:10.844174315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:32:11.100906 containerd[1828]: time="2026-01-17T00:32:11.098926153Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:11.105022 containerd[1828]: time="2026-01-17T00:32:11.104906730Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:32:11.105380 containerd[1828]: time="2026-01-17T00:32:11.104946832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:32:11.105989 kubelet[3406]: E0117 00:32:11.105706 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:32:11.105989 kubelet[3406]: E0117 00:32:11.105806 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:32:11.107516 kubelet[3406]: E0117 00:32:11.106531 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrrhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:11.107865 kubelet[3406]: E0117 00:32:11.107791 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:32:13.833736 containerd[1828]: time="2026-01-17T00:32:13.833679280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:32:14.091034 containerd[1828]: time="2026-01-17T00:32:14.089203542Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:14.092573 containerd[1828]: time="2026-01-17T00:32:14.092229131Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:32:14.092573 containerd[1828]: time="2026-01-17T00:32:14.092368736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:32:14.093956 kubelet[3406]: E0117 00:32:14.092812 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:32:14.093956 kubelet[3406]: E0117 00:32:14.093896 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:32:14.097847 kubelet[3406]: E0117 00:32:14.094826 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prbwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:14.098794 kubelet[3406]: E0117 00:32:14.098082 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:32:14.098947 containerd[1828]: time="2026-01-17T00:32:14.098591120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:32:14.362729 containerd[1828]: time="2026-01-17T00:32:14.362125618Z" 
level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:14.368731 containerd[1828]: time="2026-01-17T00:32:14.367433175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:32:14.368731 containerd[1828]: time="2026-01-17T00:32:14.367492577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:32:14.369020 kubelet[3406]: E0117 00:32:14.368347 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:32:14.369020 kubelet[3406]: E0117 00:32:14.368437 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:32:14.369817 kubelet[3406]: E0117 00:32:14.369701 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:14.370986 kubelet[3406]: E0117 00:32:14.370928 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:32:19.835999 kubelet[3406]: E0117 00:32:19.835935 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:32:20.016920 systemd[1]: Started sshd@7-10.200.8.33:22-10.200.16.10:45836.service - OpenSSH per-connection server daemon (10.200.16.10:45836). Jan 17 00:32:20.681371 sshd[5772]: Accepted publickey for core from 10.200.16.10 port 45836 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:20.684876 sshd[5772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:20.696416 systemd-logind[1811]: New session 10 of user core. Jan 17 00:32:20.701921 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 00:32:20.836192 kubelet[3406]: E0117 00:32:20.835990 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:32:21.332567 sshd[5772]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:21.339700 systemd[1]: sshd@7-10.200.8.33:22-10.200.16.10:45836.service: Deactivated successfully. Jan 17 00:32:21.348313 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:32:21.355275 systemd-logind[1811]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:32:21.359601 systemd-logind[1811]: Removed session 10. Jan 17 00:32:23.836168 kubelet[3406]: E0117 00:32:23.836105 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:32:24.840771 kubelet[3406]: E0117 00:32:24.838078 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:32:25.835161 kubelet[3406]: E0117 00:32:25.835087 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:32:26.449572 systemd[1]: Started sshd@8-10.200.8.33:22-10.200.16.10:45852.service - OpenSSH per-connection server daemon (10.200.16.10:45852). Jan 17 00:32:27.131962 sshd[5814]: Accepted publickey for core from 10.200.16.10 port 45852 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:27.138459 sshd[5814]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:27.149445 systemd-logind[1811]: New session 11 of user core. Jan 17 00:32:27.157135 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:32:27.279699 containerd[1828]: time="2026-01-17T00:32:27.279411171Z" level=info msg="StopPodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.355 [WARNING][5828] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"086626e6-23d7-433b-8fe2-380f0110d591", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415", Pod:"goldmane-666569f655-jt8r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calief297db3db4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.355 [INFO][5828] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.355 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" iface="eth0" netns="" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.355 [INFO][5828] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.356 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.396 [INFO][5835] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.397 [INFO][5835] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.397 [INFO][5835] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.405 [WARNING][5835] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.405 [INFO][5835] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.407 [INFO][5835] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:27.417161 containerd[1828]: 2026-01-17 00:32:27.410 [INFO][5828] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.417958 containerd[1828]: time="2026-01-17T00:32:27.417231891Z" level=info msg="TearDown network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" successfully" Jan 17 00:32:27.417958 containerd[1828]: time="2026-01-17T00:32:27.417280292Z" level=info msg="StopPodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" returns successfully" Jan 17 00:32:27.421413 containerd[1828]: time="2026-01-17T00:32:27.419590456Z" level=info msg="RemovePodSandbox for \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" Jan 17 00:32:27.421413 containerd[1828]: time="2026-01-17T00:32:27.421090198Z" level=info msg="Forcibly stopping sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\"" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.504 [WARNING][5849] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"086626e6-23d7-433b-8fe2-380f0110d591", ResourceVersion:"1275", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"2a55d1989ef38ab2b48489f3fb5d914e507cc19aff03a90bcccaed7b1f775415", Pod:"goldmane-666569f655-jt8r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calief297db3db4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.505 [INFO][5849] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.505 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" iface="eth0" netns="" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.505 [INFO][5849] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.505 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.587 [INFO][5864] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.587 [INFO][5864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.588 [INFO][5864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.604 [WARNING][5864] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.604 [INFO][5864] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" HandleID="k8s-pod-network.78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-goldmane--666569f655--jt8r9-eth0" Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.607 [INFO][5864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:27.613968 containerd[1828]: 2026-01-17 00:32:27.609 [INFO][5849] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206" Jan 17 00:32:27.614699 containerd[1828]: time="2026-01-17T00:32:27.614060847Z" level=info msg="TearDown network for sandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" successfully" Jan 17 00:32:27.630191 containerd[1828]: time="2026-01-17T00:32:27.630062691Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:27.630421 containerd[1828]: time="2026-01-17T00:32:27.630275397Z" level=info msg="RemovePodSandbox \"78ac2226402e6b3ed934fe303b164c4fcd9a6924aa03fd93e53ee6784f15e206\" returns successfully" Jan 17 00:32:27.631318 containerd[1828]: time="2026-01-17T00:32:27.631279624Z" level=info msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" Jan 17 00:32:27.773276 sshd[5814]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:27.786230 systemd[1]: sshd@8-10.200.8.33:22-10.200.16.10:45852.service: Deactivated successfully. Jan 17 00:32:27.794220 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:32:27.794790 systemd-logind[1811]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:32:27.798817 systemd-logind[1811]: Removed session 11. Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.713 [WARNING][5880] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5246904-0f9d-4a5a-ba58-a0d97b0128df", ResourceVersion:"1223", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa", Pod:"calico-apiserver-7bd4f66f9c-4tl94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia34d620a8d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.713 [INFO][5880] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.713 [INFO][5880] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" iface="eth0" netns="" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.713 [INFO][5880] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.713 [INFO][5880] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.761 [INFO][5888] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.762 [INFO][5888] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.762 [INFO][5888] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.780 [WARNING][5888] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.780 [INFO][5888] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.791 [INFO][5888] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:27.804498 containerd[1828]: 2026-01-17 00:32:27.801 [INFO][5880] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.806041 containerd[1828]: time="2026-01-17T00:32:27.805476453Z" level=info msg="TearDown network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" successfully" Jan 17 00:32:27.806041 containerd[1828]: time="2026-01-17T00:32:27.805616857Z" level=info msg="StopPodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" returns successfully" Jan 17 00:32:27.807026 containerd[1828]: time="2026-01-17T00:32:27.806364078Z" level=info msg="RemovePodSandbox for \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" Jan 17 00:32:27.807026 containerd[1828]: time="2026-01-17T00:32:27.806402679Z" level=info msg="Forcibly stopping sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\"" Jan 17 00:32:27.837863 kubelet[3406]: E0117 00:32:27.837509 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.883 [WARNING][5905] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5246904-0f9d-4a5a-ba58-a0d97b0128df", ResourceVersion:"1331", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"b2a4ba2cd5e4f7358e8f165fc2b9088ce886bc277eac8ab0da8c2b3785fa42fa", Pod:"calico-apiserver-7bd4f66f9c-4tl94", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia34d620a8d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.884 [INFO][5905] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.884 [INFO][5905] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" iface="eth0" netns="" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.885 [INFO][5905] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.885 [INFO][5905] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.949 [INFO][5912] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.950 [INFO][5912] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.950 [INFO][5912] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.966 [WARNING][5912] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.966 [INFO][5912] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" HandleID="k8s-pod-network.5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--4tl94-eth0" Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.974 [INFO][5912] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:27.984335 containerd[1828]: 2026-01-17 00:32:27.979 [INFO][5905] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d" Jan 17 00:32:27.984335 containerd[1828]: time="2026-01-17T00:32:27.983447486Z" level=info msg="TearDown network for sandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" successfully" Jan 17 00:32:28.019927 containerd[1828]: time="2026-01-17T00:32:28.019376582Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:28.019927 containerd[1828]: time="2026-01-17T00:32:28.019492886Z" level=info msg="RemovePodSandbox \"5e7511da0a006cca33b838a7fc9c6ed328612c4bbae6312336a8beae35e4707d\" returns successfully" Jan 17 00:32:28.021572 containerd[1828]: time="2026-01-17T00:32:28.021286835Z" level=info msg="StopPodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.155 [WARNING][5926] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441e897e-7cad-49ae-85a1-babdbbc91ee3", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df", Pod:"coredns-668d6bf9bc-gkzjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie771dcbaced", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.157 [INFO][5926] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.157 [INFO][5926] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" iface="eth0" netns="" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.157 [INFO][5926] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.157 [INFO][5926] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.237 [INFO][5933] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.238 [INFO][5933] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.238 [INFO][5933] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.246 [WARNING][5933] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.246 [INFO][5933] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.248 [INFO][5933] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:28.253818 containerd[1828]: 2026-01-17 00:32:28.251 [INFO][5926] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.256028 containerd[1828]: time="2026-01-17T00:32:28.253871683Z" level=info msg="TearDown network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" successfully" Jan 17 00:32:28.256028 containerd[1828]: time="2026-01-17T00:32:28.253909884Z" level=info msg="StopPodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" returns successfully" Jan 17 00:32:28.256028 containerd[1828]: time="2026-01-17T00:32:28.254703106Z" level=info msg="RemovePodSandbox for \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" Jan 17 00:32:28.256028 containerd[1828]: time="2026-01-17T00:32:28.254869410Z" level=info msg="Forcibly stopping sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\"" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.315 [WARNING][5948] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"441e897e-7cad-49ae-85a1-babdbbc91ee3", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"dbabb9d5345548b3d0bd96e930416f5a7e46265d2d719d771374b98a907397df", Pod:"coredns-668d6bf9bc-gkzjm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie771dcbaced", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.318 [INFO][5948] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.318 [INFO][5948] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" iface="eth0" netns="" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.318 [INFO][5948] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.318 [INFO][5948] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.361 [INFO][5955] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.362 [INFO][5955] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.362 [INFO][5955] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.372 [WARNING][5955] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.372 [INFO][5955] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" HandleID="k8s-pod-network.94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--gkzjm-eth0" Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.375 [INFO][5955] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:28.379660 containerd[1828]: 2026-01-17 00:32:28.377 [INFO][5948] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2" Jan 17 00:32:28.380778 containerd[1828]: time="2026-01-17T00:32:28.379731371Z" level=info msg="TearDown network for sandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" successfully" Jan 17 00:32:28.428354 containerd[1828]: time="2026-01-17T00:32:28.428262617Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:28.428579 containerd[1828]: time="2026-01-17T00:32:28.428436922Z" level=info msg="RemovePodSandbox \"94b010ea6469b3f3ebf2ed88bc2bec5a8d1ef4fd4f7c13b75ed343d16e3ce2a2\" returns successfully" Jan 17 00:32:28.429482 containerd[1828]: time="2026-01-17T00:32:28.429087340Z" level=info msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.486 [WARNING][5970] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7052c5c-a862-4e62-a623-7782ea46a871", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e", Pod:"csi-node-driver-bnm26", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b4e2d8d2f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.487 [INFO][5970] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.487 [INFO][5970] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" iface="eth0" netns="" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.487 [INFO][5970] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.487 [INFO][5970] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.523 [INFO][5978] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.524 [INFO][5978] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.524 [INFO][5978] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.536 [WARNING][5978] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.537 [INFO][5978] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.539 [INFO][5978] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:28.543001 containerd[1828]: 2026-01-17 00:32:28.541 [INFO][5970] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.543001 containerd[1828]: time="2026-01-17T00:32:28.542954796Z" level=info msg="TearDown network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" successfully" Jan 17 00:32:28.543001 containerd[1828]: time="2026-01-17T00:32:28.542994897Z" level=info msg="StopPodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" returns successfully" Jan 17 00:32:28.544484 containerd[1828]: time="2026-01-17T00:32:28.544447537Z" level=info msg="RemovePodSandbox for \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" Jan 17 00:32:28.544612 containerd[1828]: time="2026-01-17T00:32:28.544495839Z" level=info msg="Forcibly stopping sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\"" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.603 [WARNING][5992] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a7052c5c-a862-4e62-a623-7782ea46a871", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"5f7e10d7656a3650582e610ea35d9aab67a4c201138f1d10abf3044e74fc231e", Pod:"csi-node-driver-bnm26", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5b4e2d8d2f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.603 [INFO][5992] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.603 [INFO][5992] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" iface="eth0" netns="" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.603 [INFO][5992] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.603 [INFO][5992] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.649 [INFO][6000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.649 [INFO][6000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.649 [INFO][6000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.656 [WARNING][6000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.656 [INFO][6000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" HandleID="k8s-pod-network.e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-csi--node--driver--bnm26-eth0" Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.658 [INFO][6000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:28.661977 containerd[1828]: 2026-01-17 00:32:28.660 [INFO][5992] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8" Jan 17 00:32:28.662706 containerd[1828]: time="2026-01-17T00:32:28.662059197Z" level=info msg="TearDown network for sandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" successfully" Jan 17 00:32:28.730173 containerd[1828]: time="2026-01-17T00:32:28.730080283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:28.730414 containerd[1828]: time="2026-01-17T00:32:28.730226187Z" level=info msg="RemovePodSandbox \"e8c490af9091e7fba295a7e8fb35688f3888b5c49d5750742013acaa83be66d8\" returns successfully" Jan 17 00:32:28.732512 containerd[1828]: time="2026-01-17T00:32:28.732015037Z" level=info msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.896 [WARNING][6014] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0", GenerateName:"calico-kube-controllers-7fddb47c6b-", Namespace:"calico-system", SelfLink:"", UID:"f248d2c0-f221-4bde-8ea2-75ac2344f18d", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fddb47c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f", Pod:"calico-kube-controllers-7fddb47c6b-xwhmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22e03b02eab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.896 [INFO][6014] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.896 [INFO][6014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" iface="eth0" netns="" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.896 [INFO][6014] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.896 [INFO][6014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.994 [INFO][6021] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.994 [INFO][6021] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:28.994 [INFO][6021] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:29.007 [WARNING][6021] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:29.007 [INFO][6021] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:29.009 [INFO][6021] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.015736 containerd[1828]: 2026-01-17 00:32:29.011 [INFO][6014] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.016553 containerd[1828]: time="2026-01-17T00:32:29.015904406Z" level=info msg="TearDown network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" successfully" Jan 17 00:32:29.016553 containerd[1828]: time="2026-01-17T00:32:29.015944307Z" level=info msg="StopPodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" returns successfully" Jan 17 00:32:29.019155 containerd[1828]: time="2026-01-17T00:32:29.017053038Z" level=info msg="RemovePodSandbox for \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" Jan 17 00:32:29.019155 containerd[1828]: time="2026-01-17T00:32:29.017104739Z" level=info msg="Forcibly stopping sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\"" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.086 [WARNING][6035] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0", GenerateName:"calico-kube-controllers-7fddb47c6b-", Namespace:"calico-system", SelfLink:"", UID:"f248d2c0-f221-4bde-8ea2-75ac2344f18d", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7fddb47c6b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"28e829585048bc008d45e0aecfca962c7662f089f856df96cc7e9782b01c183f", Pod:"calico-kube-controllers-7fddb47c6b-xwhmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali22e03b02eab", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.087 [INFO][6035] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.087 [INFO][6035] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" iface="eth0" netns="" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.087 [INFO][6035] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.087 [INFO][6035] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.125 [INFO][6043] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.126 [INFO][6043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.126 [INFO][6043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.134 [WARNING][6043] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.134 [INFO][6043] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" HandleID="k8s-pod-network.558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--kube--controllers--7fddb47c6b--xwhmv-eth0" Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.136 [INFO][6043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.140066 containerd[1828]: 2026-01-17 00:32:29.138 [INFO][6035] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe" Jan 17 00:32:29.140975 containerd[1828]: time="2026-01-17T00:32:29.140113149Z" level=info msg="TearDown network for sandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" successfully" Jan 17 00:32:29.173097 containerd[1828]: time="2026-01-17T00:32:29.172948559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:29.173097 containerd[1828]: time="2026-01-17T00:32:29.173075663Z" level=info msg="RemovePodSandbox \"558e2dab7ccbcdb412b532e04448696c48fd42facea7d5cd4a1f3ff17b7232fe\" returns successfully" Jan 17 00:32:29.176395 containerd[1828]: time="2026-01-17T00:32:29.176296452Z" level=info msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.245 [WARNING][6057] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4cec6c0e-e80c-4688-94c8-dc0543670d3f", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc", Pod:"calico-apiserver-7bd4f66f9c-79jbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154bb1c0825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.246 [INFO][6057] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.246 [INFO][6057] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" iface="eth0" netns="" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.246 [INFO][6057] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.246 [INFO][6057] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.284 [INFO][6064] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.285 [INFO][6064] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.286 [INFO][6064] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.296 [WARNING][6064] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.296 [INFO][6064] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.299 [INFO][6064] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.304022 containerd[1828]: 2026-01-17 00:32:29.301 [INFO][6057] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.307099 containerd[1828]: time="2026-01-17T00:32:29.305473133Z" level=info msg="TearDown network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" successfully" Jan 17 00:32:29.307099 containerd[1828]: time="2026-01-17T00:32:29.305523734Z" level=info msg="StopPodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" returns successfully" Jan 17 00:32:29.309825 containerd[1828]: time="2026-01-17T00:32:29.308956829Z" level=info msg="RemovePodSandbox for \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" Jan 17 00:32:29.309825 containerd[1828]: time="2026-01-17T00:32:29.309008031Z" level=info msg="Forcibly stopping sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\"" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.370 [WARNING][6078] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0", GenerateName:"calico-apiserver-7bd4f66f9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"4cec6c0e-e80c-4688-94c8-dc0543670d3f", ResourceVersion:"1319", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bd4f66f9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"419c4cfbd705b94312e584492918b087fe3fb0d1b3f8aab23dd5bbc79b6bc0fc", Pod:"calico-apiserver-7bd4f66f9c-79jbf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali154bb1c0825", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.371 [INFO][6078] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.371 [INFO][6078] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" iface="eth0" netns="" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.371 [INFO][6078] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.371 [INFO][6078] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.424 [INFO][6086] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.425 [INFO][6086] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.425 [INFO][6086] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.442 [WARNING][6086] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.442 [INFO][6086] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" HandleID="k8s-pod-network.29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-calico--apiserver--7bd4f66f9c--79jbf-eth0" Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.444 [INFO][6086] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.449614 containerd[1828]: 2026-01-17 00:32:29.447 [INFO][6078] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5" Jan 17 00:32:29.449614 containerd[1828]: time="2026-01-17T00:32:29.449006212Z" level=info msg="TearDown network for sandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" successfully" Jan 17 00:32:29.519208 containerd[1828]: time="2026-01-17T00:32:29.519132555Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:29.519437 containerd[1828]: time="2026-01-17T00:32:29.519242658Z" level=info msg="RemovePodSandbox \"29f0dd7e9a19c23fbb635c5b856a5b4c38c19713d3b48982c5e46dec3eee2ba5\" returns successfully" Jan 17 00:32:29.520190 containerd[1828]: time="2026-01-17T00:32:29.520138283Z" level=info msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.582 [WARNING][6100] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b534c16-0d44-4e13-804d-f2f891a56a96", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee", Pod:"coredns-668d6bf9bc-dq7hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f12d95b19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.583 [INFO][6100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.583 [INFO][6100] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" iface="eth0" netns="" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.583 [INFO][6100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.583 [INFO][6100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.635 [INFO][6107] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.635 [INFO][6107] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.635 [INFO][6107] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.643 [WARNING][6107] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.644 [INFO][6107] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.645 [INFO][6107] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.649960 containerd[1828]: 2026-01-17 00:32:29.647 [INFO][6100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.652713 containerd[1828]: time="2026-01-17T00:32:29.650035784Z" level=info msg="TearDown network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" successfully" Jan 17 00:32:29.652713 containerd[1828]: time="2026-01-17T00:32:29.650078085Z" level=info msg="StopPodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" returns successfully" Jan 17 00:32:29.652713 containerd[1828]: time="2026-01-17T00:32:29.650805005Z" level=info msg="RemovePodSandbox for \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" Jan 17 00:32:29.652713 containerd[1828]: time="2026-01-17T00:32:29.650848607Z" level=info msg="Forcibly stopping sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\"" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.718 [WARNING][6121] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3b534c16-0d44-4e13-804d-f2f891a56a96", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 30, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-2e1a0c4804", ContainerID:"76185d639d8f3a34ce138be728a92f4a3ce93774068b4c82f1b3e3da2b877fee", Pod:"coredns-668d6bf9bc-dq7hz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali86f12d95b19", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.719 [INFO][6121] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.719 [INFO][6121] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" iface="eth0" netns="" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.719 [INFO][6121] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.719 [INFO][6121] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.760 [INFO][6128] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.761 [INFO][6128] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.761 [INFO][6128] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.775 [WARNING][6128] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.775 [INFO][6128] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" HandleID="k8s-pod-network.36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Workload="ci--4081.3.6--n--2e1a0c4804-k8s-coredns--668d6bf9bc--dq7hz-eth0" Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.778 [INFO][6128] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:32:29.782934 containerd[1828]: 2026-01-17 00:32:29.780 [INFO][6121] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f" Jan 17 00:32:29.783656 containerd[1828]: time="2026-01-17T00:32:29.783016570Z" level=info msg="TearDown network for sandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" successfully" Jan 17 00:32:29.823248 containerd[1828]: time="2026-01-17T00:32:29.823157783Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:32:29.823802 containerd[1828]: time="2026-01-17T00:32:29.823568594Z" level=info msg="RemovePodSandbox \"36cf0ae218c667d0458a2d094e8b5456a2bf569f250a6cd2e45dd788a005259f\" returns successfully" Jan 17 00:32:32.840998 kubelet[3406]: E0117 00:32:32.840859 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:32:32.842103 kubelet[3406]: E0117 00:32:32.841580 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 
00:32:32.890463 systemd[1]: Started sshd@9-10.200.8.33:22-10.200.16.10:55974.service - OpenSSH per-connection server daemon (10.200.16.10:55974). Jan 17 00:32:33.565774 sshd[6134]: Accepted publickey for core from 10.200.16.10 port 55974 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:33.567009 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:33.575838 systemd-logind[1811]: New session 12 of user core. Jan 17 00:32:33.582181 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:32:34.132653 sshd[6134]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:34.138978 systemd[1]: sshd@9-10.200.8.33:22-10.200.16.10:55974.service: Deactivated successfully. Jan 17 00:32:34.145187 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:32:34.147218 systemd-logind[1811]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:32:34.148886 systemd-logind[1811]: Removed session 12. Jan 17 00:32:34.248050 systemd[1]: Started sshd@10-10.200.8.33:22-10.200.16.10:55984.service - OpenSSH per-connection server daemon (10.200.16.10:55984). Jan 17 00:32:34.893629 sshd[6151]: Accepted publickey for core from 10.200.16.10 port 55984 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:34.896253 sshd[6151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:34.905086 systemd-logind[1811]: New session 13 of user core. Jan 17 00:32:34.915897 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:32:35.645179 sshd[6151]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:35.651910 systemd[1]: sshd@10-10.200.8.33:22-10.200.16.10:55984.service: Deactivated successfully. Jan 17 00:32:35.656705 systemd-logind[1811]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:32:35.658936 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:32:35.664936 systemd-logind[1811]: Removed session 13. Jan 17 00:32:35.761252 systemd[1]: Started sshd@11-10.200.8.33:22-10.200.16.10:55990.service - OpenSSH per-connection server daemon (10.200.16.10:55990). Jan 17 00:32:36.407709 sshd[6163]: Accepted publickey for core from 10.200.16.10 port 55990 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:36.410049 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:36.419849 systemd-logind[1811]: New session 14 of user core. Jan 17 00:32:36.423519 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:32:37.096457 sshd[6163]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:37.107473 systemd[1]: sshd@11-10.200.8.33:22-10.200.16.10:55990.service: Deactivated successfully. Jan 17 00:32:37.123462 systemd-logind[1811]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:32:37.124774 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:32:37.127516 systemd-logind[1811]: Removed session 14. 
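
Annotation: the sshd@9/sshd@10/sshd@11 unit names and session-12/13/14 scopes reflect socket-activated SSH: a socket unit with Accept=yes makes PID 1 fork one per-connection service instance named sshd@<n>-<local>:22-<peer>:<port>.service, and pam_systemd/logind places each login in its own session-N.scope. A minimal sketch of that unit pattern — illustrative of the mechanism, not necessarily Flatcar's exact shipped units:

# sshd.socket — Accept=yes forks one instance per TCP connection, producing
# names like sshd@12-10.200.8.33:22-10.200.16.10:55990.service as seen above
[Unit]
Description=OpenSSH per-connection server socket

[Socket]
ListenStream=22
Accept=yes

[Install]
WantedBy=sockets.target

# sshd@.service — one instance per accepted connection; sshd -i serves the
# single connection that systemd hands it on stdin
[Unit]
Description=OpenSSH per-connection server daemon

[Service]
ExecStart=-/usr/sbin/sshd -i
StandardInput=socket
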
Jan 17 00:32:38.835791 kubelet[3406]: E0117 00:32:38.835298 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:32:38.836468 kubelet[3406]: E0117 00:32:38.835841 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:32:39.837707 kubelet[3406]: E0117 00:32:39.837632 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:32:40.834777 kubelet[3406]: E0117 00:32:40.833648 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:32:42.210589 systemd[1]: Started sshd@12-10.200.8.33:22-10.200.16.10:34456.service - OpenSSH per-connection server daemon (10.200.16.10:34456). Jan 17 00:32:42.880864 sshd[6181]: Accepted publickey for core from 10.200.16.10 port 34456 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:42.882319 sshd[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:42.891043 systemd-logind[1811]: New session 15 of user core. 
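The errors alternate between ErrImagePull (a pull was just attempted and failed) and ImagePullBackOff (the kubelet is waiting out a delay before retrying). The retry spacing follows capped exponential backoff; the 10 s initial delay and 300 s cap below are the commonly documented kubelet defaults, an assumption rather than something this log states:

    # Sketch: image-pull retry schedule with assumed kubelet defaults.
    import itertools

    def backoff(initial=10, cap=300):
        delay = initial
        while True:
            yield delay
            delay = min(delay * 2, cap)

    print(list(itertools.islice(backoff(), 8)))
    # -> [10, 20, 40, 80, 160, 300, 300, 300] seconds between attempts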
Jan 17 00:32:42.899867 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:32:43.451940 sshd[6181]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:43.456381 systemd-logind[1811]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:32:43.461092 systemd[1]: sshd@12-10.200.8.33:22-10.200.16.10:34456.service: Deactivated successfully. Jan 17 00:32:43.468582 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:32:43.473518 systemd-logind[1811]: Removed session 15. Jan 17 00:32:47.834152 kubelet[3406]: E0117 00:32:47.832549 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:32:47.836435 containerd[1828]: time="2026-01-17T00:32:47.834070176Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:32:48.081071 containerd[1828]: time="2026-01-17T00:32:48.080956588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:48.085084 containerd[1828]: time="2026-01-17T00:32:48.084891496Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:32:48.085084 containerd[1828]: time="2026-01-17T00:32:48.085044900Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:32:48.085781 kubelet[3406]: E0117 00:32:48.085700 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:32:48.085926 kubelet[3406]: E0117 00:32:48.085805 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:32:48.086796 kubelet[3406]: E0117 00:32:48.085990 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:b7d7352bb0c64a4eb1262e2afe0300e5,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:48.090476 containerd[1828]: time="2026-01-17T00:32:48.090430249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:32:48.346642 containerd[1828]: time="2026-01-17T00:32:48.345916097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:48.348826 containerd[1828]: time="2026-01-17T00:32:48.348736275Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:32:48.348978 containerd[1828]: time="2026-01-17T00:32:48.348790677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:32:48.350194 kubelet[3406]: E0117 00:32:48.349159 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:32:48.350194 kubelet[3406]: E0117 00:32:48.349238 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:32:48.350194 kubelet[3406]: E0117 00:32:48.349417 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jgdct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-8dc795d65-glbln_calico-system(0b534c0b-2a92-45dc-b919-720218923434): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:48.352930 kubelet[3406]: E0117 00:32:48.352875 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:32:48.565260 systemd[1]: Started sshd@13-10.200.8.33:22-10.200.16.10:34466.service - OpenSSH per-connection server daemon (10.200.16.10:34466). 
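containerd's `trying next host - response was http.StatusNotFound` entries mean ghcr.io answered the manifest request for the v3.30.4 tag with a 404, so the reference cannot be resolved at all. A standard-library sketch that reproduces the check; the anonymous token endpoint and headers are assumptions based on the usual Docker registry v2 token flow, not taken from this log:

    # Sketch: HEAD the manifest containerd failed to resolve.
    import json, urllib.error, urllib.parse, urllib.request

    repo, tag = "flatcar/calico/whisker", "v3.30.4"  # from the failing reference

    # Anonymous pull token (assumed standard token auth on ghcr.io).
    scope = urllib.parse.quote(f"repository:{repo}:pull")
    token = json.load(urllib.request.urlopen(
        f"https://ghcr.io/token?scope={scope}"))["token"]

    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
        method="HEAD")
    try:
        with urllib.request.urlopen(req) as resp:
            print(tag, "resolves:", resp.status)
    except urllib.error.HTTPError as err:
        print(tag, "->", err.code)   # a 404 here matches the NotFound above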
Jan 17 00:32:49.220531 sshd[6201]: Accepted publickey for core from 10.200.16.10 port 34466 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:49.228415 sshd[6201]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:49.237827 systemd-logind[1811]: New session 16 of user core. Jan 17 00:32:49.248413 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:32:49.744523 sshd[6201]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:49.751939 systemd[1]: sshd@13-10.200.8.33:22-10.200.16.10:34466.service: Deactivated successfully. Jan 17 00:32:49.759498 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:32:49.761233 systemd-logind[1811]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:32:49.765033 systemd-logind[1811]: Removed session 16. Jan 17 00:32:49.831547 kubelet[3406]: E0117 00:32:49.831486 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:32:51.832270 kubelet[3406]: E0117 00:32:51.831728 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:32:53.832881 containerd[1828]: time="2026-01-17T00:32:53.832825521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:32:54.081765 containerd[1828]: time="2026-01-17T00:32:54.081668461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:54.085200 containerd[1828]: time="2026-01-17T00:32:54.084967151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:32:54.085200 containerd[1828]: time="2026-01-17T00:32:54.085032152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:32:54.086078 kubelet[3406]: E0117 00:32:54.085309 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:32:54.086078 
kubelet[3406]: E0117 00:32:54.085394 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:32:54.086078 kubelet[3406]: E0117 00:32:54.085599 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wrrhm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7fddb47c6b-xwhmv_calico-system(f248d2c0-f221-4bde-8ea2-75ac2344f18d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:54.087398 kubelet[3406]: E0117 00:32:54.087218 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:32:54.840790 containerd[1828]: time="2026-01-17T00:32:54.840048303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:32:54.870281 systemd[1]: Started sshd@14-10.200.8.33:22-10.200.16.10:55624.service - OpenSSH per-connection server daemon (10.200.16.10:55624). Jan 17 00:32:55.101110 containerd[1828]: time="2026-01-17T00:32:55.100873768Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:55.106499 containerd[1828]: time="2026-01-17T00:32:55.106402718Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:32:55.107779 containerd[1828]: time="2026-01-17T00:32:55.106570822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:32:55.107905 kubelet[3406]: E0117 00:32:55.106880 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:32:55.107905 kubelet[3406]: E0117 00:32:55.106969 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:32:55.107905 kubelet[3406]: E0117 00:32:55.107156 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:55.110403 containerd[1828]: time="2026-01-17T00:32:55.110345124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:32:55.349686 containerd[1828]: time="2026-01-17T00:32:55.349616505Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:32:55.353213 containerd[1828]: time="2026-01-17T00:32:55.353026398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:32:55.353213 containerd[1828]: time="2026-01-17T00:32:55.353182302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:32:55.353758 kubelet[3406]: E0117 00:32:55.353680 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:32:55.353884 kubelet[3406]: E0117 00:32:55.353779 3406 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:32:55.354028 kubelet[3406]: E0117 00:32:55.353969 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nq5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bnm26_calico-system(a7052c5c-a862-4e62-a623-7782ea46a871): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:32:55.356681 kubelet[3406]: E0117 00:32:55.356417 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:32:55.557446 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 55624 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:32:55.562493 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:32:55.571601 systemd-logind[1811]: New session 17 of user core. Jan 17 00:32:55.578950 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:32:56.122316 sshd[6217]: pam_unix(sshd:session): session closed for user core Jan 17 00:32:56.128275 systemd[1]: sshd@14-10.200.8.33:22-10.200.16.10:55624.service: Deactivated successfully. Jan 17 00:32:56.142107 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:32:56.144478 systemd-logind[1811]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:32:56.146230 systemd-logind[1811]: Removed session 17. Jan 17 00:32:59.836885 containerd[1828]: time="2026-01-17T00:32:59.836223849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:33:00.085988 containerd[1828]: time="2026-01-17T00:33:00.085671650Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:33:00.090947 containerd[1828]: time="2026-01-17T00:33:00.089029732Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:33:00.090947 containerd[1828]: time="2026-01-17T00:33:00.089173335Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:33:00.091354 kubelet[3406]: E0117 00:33:00.089735 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:33:00.091354 kubelet[3406]: E0117 00:33:00.090026 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:33:00.093310 kubelet[3406]: E0117 00:33:00.093248 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q9cw4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jt8r9_calico-system(086626e6-23d7-433b-8fe2-380f0110d591): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:33:00.094560 kubelet[3406]: E0117 00:33:00.094476 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:33:00.841888 containerd[1828]: 
time="2026-01-17T00:33:00.841754543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:33:00.846682 kubelet[3406]: E0117 00:33:00.846574 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:33:01.105732 containerd[1828]: time="2026-01-17T00:33:01.105056483Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:33:01.110776 containerd[1828]: time="2026-01-17T00:33:01.109999503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:33:01.111126 containerd[1828]: time="2026-01-17T00:33:01.110030704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:33:01.116216 kubelet[3406]: E0117 00:33:01.116155 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:33:01.117587 kubelet[3406]: E0117 00:33:01.116596 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:33:01.118054 kubelet[3406]: E0117 00:33:01.117868 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sp24j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-4tl94_calico-apiserver(a5246904-0f9d-4a5a-ba58-a0d97b0128df): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:33:01.120212 kubelet[3406]: E0117 00:33:01.120174 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:33:01.235608 systemd[1]: Started sshd@15-10.200.8.33:22-10.200.16.10:43766.service - OpenSSH per-connection server daemon (10.200.16.10:43766). Jan 17 00:33:01.895447 sshd[6251]: Accepted publickey for core from 10.200.16.10 port 43766 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:01.901011 sshd[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:01.907443 systemd-logind[1811]: New session 18 of user core. Jan 17 00:33:01.913357 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:33:02.611060 sshd[6251]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:02.619198 systemd[1]: sshd@15-10.200.8.33:22-10.200.16.10:43766.service: Deactivated successfully. 
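With the same handful of images failing on repeat, most of this journal reduces to a short list of unresolvable references. A sketch that condenses a saved copy of the log (journal.txt is a hypothetical filename) down to the distinct failing tags:

    # Sketch: count the distinct image references that failed to resolve.
    import collections, re

    text = open("journal.txt").read()        # hypothetical export of this log
    refs = re.findall(r"ghcr\.io/flatcar/calico/[\w./-]+:v[\w.]+", text)
    for ref, count in collections.Counter(refs).most_common():
        print(f"{count:5d}  {ref}")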
Jan 17 00:33:02.632465 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:33:02.634459 systemd-logind[1811]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:33:02.636359 systemd-logind[1811]: Removed session 18. Jan 17 00:33:02.724410 systemd[1]: Started sshd@16-10.200.8.33:22-10.200.16.10:43780.service - OpenSSH per-connection server daemon (10.200.16.10:43780). Jan 17 00:33:03.386084 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 43780 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:03.388179 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:03.397046 systemd-logind[1811]: New session 19 of user core. Jan 17 00:33:03.401973 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:33:03.834612 containerd[1828]: time="2026-01-17T00:33:03.832766098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:33:04.076470 containerd[1828]: time="2026-01-17T00:33:04.076178952Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:33:04.079451 containerd[1828]: time="2026-01-17T00:33:04.079227226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:33:04.079451 containerd[1828]: time="2026-01-17T00:33:04.079365430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:33:04.081774 kubelet[3406]: E0117 00:33:04.079873 3406 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:33:04.081774 kubelet[3406]: E0117 00:33:04.079945 3406 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:33:04.081774 kubelet[3406]: E0117 00:33:04.080131 3406 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-prbwh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bd4f66f9c-79jbf_calico-apiserver(4cec6c0e-e80c-4688-94c8-dc0543670d3f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:33:04.085043 kubelet[3406]: E0117 00:33:04.084855 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:33:04.144063 sshd[6265]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:04.149466 systemd-logind[1811]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:33:04.151598 systemd[1]: sshd@16-10.200.8.33:22-10.200.16.10:43780.service: Deactivated successfully. Jan 17 00:33:04.167842 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:33:04.171215 systemd-logind[1811]: Removed session 19. Jan 17 00:33:04.256414 systemd[1]: Started sshd@17-10.200.8.33:22-10.200.16.10:43786.service - OpenSSH per-connection server daemon (10.200.16.10:43786). 
Jan 17 00:33:04.835553 kubelet[3406]: E0117 00:33:04.835264 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:33:04.913775 sshd[6279]: Accepted publickey for core from 10.200.16.10 port 43786 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:04.917874 sshd[6279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:04.932856 systemd-logind[1811]: New session 20 of user core. Jan 17 00:33:04.939154 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:33:06.253069 sshd[6279]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:06.263328 systemd[1]: sshd@17-10.200.8.33:22-10.200.16.10:43786.service: Deactivated successfully. Jan 17 00:33:06.275453 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:33:06.278054 systemd-logind[1811]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:33:06.283333 systemd-logind[1811]: Removed session 20. Jan 17 00:33:06.374853 systemd[1]: Started sshd@18-10.200.8.33:22-10.200.16.10:43802.service - OpenSSH per-connection server daemon (10.200.16.10:43802). Jan 17 00:33:07.051861 sshd[6317]: Accepted publickey for core from 10.200.16.10 port 43802 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:07.055853 sshd[6317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:07.073209 systemd-logind[1811]: New session 21 of user core. Jan 17 00:33:07.082245 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:33:07.843830 sshd[6317]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:07.850237 systemd[1]: sshd@18-10.200.8.33:22-10.200.16.10:43802.service: Deactivated successfully. Jan 17 00:33:07.866142 systemd-logind[1811]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:33:07.868623 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:33:07.872729 systemd-logind[1811]: Removed session 21. Jan 17 00:33:07.961237 systemd[1]: Started sshd@19-10.200.8.33:22-10.200.16.10:43810.service - OpenSSH per-connection server daemon (10.200.16.10:43810). Jan 17 00:33:08.600194 sshd[6331]: Accepted publickey for core from 10.200.16.10 port 43810 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:08.603131 sshd[6331]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:08.613148 systemd-logind[1811]: New session 22 of user core. Jan 17 00:33:08.619305 systemd[1]: Started session-22.scope - Session 22 of User core. 
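Interleaved with the pull failures, sshd and systemd-logind record a steady series of short sessions for user core from 10.200.16.10; each connection gets its own sshd@N-...service unit and session-N.scope. A sketch that pairs each `Accepted publickey` record with the matching `session closed` by sshd PID, again reading from a hypothetical journal.txt export:

    # Sketch: pair SSH login and logout records by sshd PID.
    import re

    text = open("journal.txt").read()        # hypothetical export of this log
    pattern = re.compile(
        r"(\w{3} \d+ [\d:.]+) sshd\[(\d+)\]: "
        r"(?:Accepted publickey for (\w+) from ([\d.]+) port (\d+)"
        r"|pam_unix\(sshd:session\): session (closed))")

    opened = {}
    for ts, pid, user, ip, port, closed in pattern.findall(text):
        if user:                             # login record
            opened[pid] = (ts, user, ip, port)
        elif closed and pid in opened:       # matching logout record
            t0, user, ip, port = opened.pop(pid)
            print(f"sshd[{pid}] {user}@{ip}:{port}  {t0} -> {ts}")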
Jan 17 00:33:08.846433 kubelet[3406]: E0117 00:33:08.846363 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871" Jan 17 00:33:09.155613 sshd[6331]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:09.162439 systemd-logind[1811]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:33:09.163921 systemd[1]: sshd@19-10.200.8.33:22-10.200.16.10:43810.service: Deactivated successfully. Jan 17 00:33:09.173201 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:33:09.175327 systemd-logind[1811]: Removed session 22. Jan 17 00:33:13.836876 kubelet[3406]: E0117 00:33:13.835987 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df" Jan 17 00:33:14.270216 systemd[1]: Started sshd@20-10.200.8.33:22-10.200.16.10:45348.service - OpenSSH per-connection server daemon (10.200.16.10:45348). Jan 17 00:33:14.840880 kubelet[3406]: E0117 00:33:14.840824 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591" Jan 17 00:33:14.928395 sshd[6345]: Accepted publickey for core from 10.200.16.10 port 45348 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:14.931298 sshd[6345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:14.941413 systemd-logind[1811]: New session 23 of user core. Jan 17 00:33:14.946128 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:33:15.489156 sshd[6345]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:15.498197 systemd[1]: sshd@20-10.200.8.33:22-10.200.16.10:45348.service: Deactivated successfully. 
Jan 17 00:33:15.508952 systemd-logind[1811]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:33:15.510398 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:33:15.512867 systemd-logind[1811]: Removed session 23. Jan 17 00:33:15.836790 kubelet[3406]: E0117 00:33:15.835960 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434" Jan 17 00:33:16.837567 kubelet[3406]: E0117 00:33:16.837494 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f" Jan 17 00:33:18.838800 kubelet[3406]: E0117 00:33:18.838438 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d" Jan 17 00:33:20.604214 systemd[1]: Started sshd@21-10.200.8.33:22-10.200.16.10:35352.service - OpenSSH per-connection server daemon (10.200.16.10:35352). Jan 17 00:33:21.259565 sshd[6361]: Accepted publickey for core from 10.200.16.10 port 35352 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:33:21.260765 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:33:21.269184 systemd-logind[1811]: New session 24 of user core. Jan 17 00:33:21.277519 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:33:21.946044 sshd[6361]: pam_unix(sshd:session): session closed for user core Jan 17 00:33:21.951632 systemd-logind[1811]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:33:21.953623 systemd[1]: sshd@21-10.200.8.33:22-10.200.16.10:35352.service: Deactivated successfully. 
Jan 17 00:33:21.959627 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 00:33:21.964246 systemd-logind[1811]: Removed session 24.
Jan 17 00:33:22.843772 kubelet[3406]: E0117 00:33:22.840071 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871"
Jan 17 00:33:25.833084 kubelet[3406]: E0117 00:33:25.831644 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df"
Jan 17 00:33:27.060701 systemd[1]: Started sshd@22-10.200.8.33:22-10.200.16.10:35364.service - OpenSSH per-connection server daemon (10.200.16.10:35364).
Jan 17 00:33:27.733928 sshd[6399]: Accepted publickey for core from 10.200.16.10 port 35364 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE
Jan 17 00:33:27.735507 sshd[6399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:33:27.744137 systemd-logind[1811]: New session 25 of user core.
Jan 17 00:33:27.752909 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 00:33:28.341954 sshd[6399]: pam_unix(sshd:session): session closed for user core
Jan 17 00:33:28.354695 systemd[1]: sshd@22-10.200.8.33:22-10.200.16.10:35364.service: Deactivated successfully.
Jan 17 00:33:28.365467 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 00:33:28.367393 systemd-logind[1811]: Session 25 logged out. Waiting for processes to exit.
Jan 17 00:33:28.369714 systemd-logind[1811]: Removed session 25.
Jan 17 00:33:28.835211 kubelet[3406]: E0117 00:33:28.835150 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591"
Jan 17 00:33:29.834479 kubelet[3406]: E0117 00:33:29.834015 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434"
Jan 17 00:33:31.833390 kubelet[3406]: E0117 00:33:31.832873 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f"
Jan 17 00:33:31.835022 kubelet[3406]: E0117 00:33:31.834189 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d"
Jan 17 00:33:33.458466 systemd[1]: Started sshd@23-10.200.8.33:22-10.200.16.10:49424.service - OpenSSH per-connection server daemon (10.200.16.10:49424).
Jan 17 00:33:34.111662 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 49424 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE
Jan 17 00:33:34.114881 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:33:34.120322 systemd-logind[1811]: New session 26 of user core.
Jan 17 00:33:34.129081 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:33:34.692106 sshd[6414]: pam_unix(sshd:session): session closed for user core
Jan 17 00:33:34.707260 systemd[1]: sshd@23-10.200.8.33:22-10.200.16.10:49424.service: Deactivated successfully.
Jan 17 00:33:34.725049 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:33:34.733452 systemd-logind[1811]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:33:34.737015 systemd-logind[1811]: Removed session 26.
Jan 17 00:33:34.838095 kubelet[3406]: E0117 00:33:34.836968 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871"
Jan 17 00:33:37.832724 kubelet[3406]: E0117 00:33:37.832625 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-4tl94" podUID="a5246904-0f9d-4a5a-ba58-a0d97b0128df"
Jan 17 00:33:39.804366 systemd[1]: Started sshd@24-10.200.8.33:22-10.200.16.10:41180.service - OpenSSH per-connection server daemon (10.200.16.10:41180).
Jan 17 00:33:40.453342 sshd[6430]: Accepted publickey for core from 10.200.16.10 port 41180 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE
Jan 17 00:33:40.455299 sshd[6430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:33:40.461204 systemd-logind[1811]: New session 27 of user core.
Jan 17 00:33:40.467272 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 00:33:40.842973 kubelet[3406]: E0117 00:33:40.841441 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-8dc795d65-glbln" podUID="0b534c0b-2a92-45dc-b919-720218923434"
Jan 17 00:33:41.032052 sshd[6430]: pam_unix(sshd:session): session closed for user core
Jan 17 00:33:41.040764 systemd[1]: sshd@24-10.200.8.33:22-10.200.16.10:41180.service: Deactivated successfully.
Jan 17 00:33:41.042841 systemd-logind[1811]: Session 27 logged out. Waiting for processes to exit.
Jan 17 00:33:41.053524 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 00:33:41.058799 systemd-logind[1811]: Removed session 27.
Jan 17 00:33:42.839889 kubelet[3406]: E0117 00:33:42.836700 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jt8r9" podUID="086626e6-23d7-433b-8fe2-380f0110d591"
Jan 17 00:33:45.832190 kubelet[3406]: E0117 00:33:45.832056 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7fddb47c6b-xwhmv" podUID="f248d2c0-f221-4bde-8ea2-75ac2344f18d"
Jan 17 00:33:45.835817 kubelet[3406]: E0117 00:33:45.835172 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bd4f66f9c-79jbf" podUID="4cec6c0e-e80c-4688-94c8-dc0543670d3f"
Jan 17 00:33:46.155192 systemd[1]: Started sshd@25-10.200.8.33:22-10.200.16.10:41194.service - OpenSSH per-connection server daemon (10.200.16.10:41194).
Jan 17 00:33:46.827153 sshd[6444]: Accepted publickey for core from 10.200.16.10 port 41194 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE
Jan 17 00:33:46.829296 sshd[6444]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:33:46.845395 systemd-logind[1811]: New session 28 of user core.
Jan 17 00:33:46.851203 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 17 00:33:47.400088 sshd[6444]: pam_unix(sshd:session): session closed for user core
Jan 17 00:33:47.409509 systemd-logind[1811]: Session 28 logged out. Waiting for processes to exit.
Jan 17 00:33:47.410584 systemd[1]: sshd@25-10.200.8.33:22-10.200.16.10:41194.service: Deactivated successfully.
Jan 17 00:33:47.427139 systemd[1]: session-28.scope: Deactivated successfully.
Jan 17 00:33:47.428523 systemd-logind[1811]: Removed session 28.
Jan 17 00:33:48.841923 kubelet[3406]: E0117 00:33:48.841834 3406 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bnm26" podUID="a7052c5c-a862-4e62-a623-7782ea46a871"