Jan 24 00:43:16.088973 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 23 22:35:12 -00 2026
Jan 24 00:43:16.088999 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:43:16.089010 kernel: BIOS-provided physical RAM map:
Jan 24 00:43:16.089016 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 24 00:43:16.089022 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 24 00:43:16.089030 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 24 00:43:16.089037 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 24 00:43:16.089046 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 24 00:43:16.089055 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 24 00:43:16.089061 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 24 00:43:16.089067 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 24 00:43:16.089075 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 24 00:43:16.089083 kernel: printk: bootconsole [earlyser0] enabled
Jan 24 00:43:16.089089 kernel: NX (Execute Disable) protection: active
Jan 24 00:43:16.089101 kernel: APIC: Static calls initialized
Jan 24 00:43:16.089109 kernel: efi: EFI v2.7 by Microsoft
Jan 24 00:43:16.089116 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98
Jan 24 00:43:16.089125 kernel: SMBIOS 3.1.0 present.
Jan 24 00:43:16.089133 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 24 00:43:16.089139 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 24 00:43:16.089146 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 24 00:43:16.089155 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0
Jan 24 00:43:16.089163 kernel: Hyper-V: Nested features: 0x1e0101
Jan 24 00:43:16.089170 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 24 00:43:16.089181 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 24 00:43:16.089188 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 24 00:43:16.089195 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 24 00:43:16.089205 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 24 00:43:16.089213 kernel: tsc: Detected 2593.907 MHz processor
Jan 24 00:43:16.089220 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 24 00:43:16.089230 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 24 00:43:16.089238 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 24 00:43:16.089245 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 24 00:43:16.089257 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 24 00:43:16.089264 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 24 00:43:16.089274 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 24 00:43:16.089281 kernel: Using GB pages for direct mapping
Jan 24 00:43:16.089287 kernel: Secure boot disabled
Jan 24 00:43:16.089297 kernel: ACPI: Early table checksum verification disabled
Jan 24 00:43:16.089305 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 24 00:43:16.089318 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089328 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089336 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 24 00:43:16.089346 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 24 00:43:16.089354 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089362 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089372 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089382 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089392 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089400 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089407 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 24 00:43:16.089418 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 24 00:43:16.089425 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 24 00:43:16.089433 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 24 00:43:16.089443 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 24 00:43:16.089453 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 24 00:43:16.089461 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 24 00:43:16.089471 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 24 00:43:16.089478 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 24 00:43:16.089487 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 24 00:43:16.089496 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 24 00:43:16.089503 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 24 00:43:16.089512 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 24 00:43:16.089521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 24 00:43:16.089531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 24 00:43:16.089541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 24 00:43:16.089549 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 24 00:43:16.089556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 24 00:43:16.089567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 24 00:43:16.089574 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 24 00:43:16.089581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 24 00:43:16.089592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 24 00:43:16.089599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 24 00:43:16.089611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 24 00:43:16.089620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 24 00:43:16.089627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 24 00:43:16.089636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 24 00:43:16.089645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 24 00:43:16.089652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 24 00:43:16.089662 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 24 00:43:16.089670 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 24 00:43:16.089688 kernel: Zone ranges:
Jan 24 00:43:16.089699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 24 00:43:16.089710 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 24 00:43:16.089717 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 24 00:43:16.089726 kernel: Movable zone start for each node
Jan 24 00:43:16.089735 kernel: Early memory node ranges
Jan 24 00:43:16.089742 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 24 00:43:16.089752 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 24 00:43:16.089760 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 24 00:43:16.089768 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 24 00:43:16.089780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 24 00:43:16.089788 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 24 00:43:16.089795 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 24 00:43:16.089806 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 24 00:43:16.089813 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 24 00:43:16.089821 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 24 00:43:16.089831 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 24 00:43:16.089838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 24 00:43:16.089847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 24 00:43:16.089858 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 24 00:43:16.089866 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 24 00:43:16.089876 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 24 00:43:16.089884 kernel: Booting paravirtualized kernel on Hyper-V
Jan 24 00:43:16.089892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 24 00:43:16.089902 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 24 00:43:16.089909 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 24 00:43:16.089917 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 24 00:43:16.089927 kernel: pcpu-alloc: [0] 0 1
Jan 24 00:43:16.089936 kernel: Hyper-V: PV spinlocks enabled
Jan 24 00:43:16.089945 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 24 00:43:16.089955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:43:16.089963 kernel: random: crng init done
Jan 24 00:43:16.089972 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 24 00:43:16.089981 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 24 00:43:16.089988 kernel: Fallback order for Node 0: 0
Jan 24 00:43:16.089998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 24 00:43:16.090009 kernel: Policy zone: Normal
Jan 24 00:43:16.090026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 24 00:43:16.090034 kernel: software IO TLB: area num 2.
Jan 24 00:43:16.090045 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved)
Jan 24 00:43:16.090056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 24 00:43:16.090063 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 24 00:43:16.090072 kernel: ftrace: allocated 149 pages with 4 groups
Jan 24 00:43:16.090082 kernel: Dynamic Preempt: voluntary
Jan 24 00:43:16.090093 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 24 00:43:16.090102 kernel: rcu: RCU event tracing is enabled.
Jan 24 00:43:16.090114 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 24 00:43:16.090124 kernel: Trampoline variant of Tasks RCU enabled.
Jan 24 00:43:16.090131 kernel: Rude variant of Tasks RCU enabled.
Jan 24 00:43:16.090142 kernel: Tracing variant of Tasks RCU enabled.
Jan 24 00:43:16.090150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 24 00:43:16.090158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 24 00:43:16.090171 kernel: Using NULL legacy PIC
Jan 24 00:43:16.090179 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 24 00:43:16.090190 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 24 00:43:16.090198 kernel: Console: colour dummy device 80x25
Jan 24 00:43:16.090206 kernel: printk: console [tty1] enabled
Jan 24 00:43:16.090217 kernel: printk: console [ttyS0] enabled
Jan 24 00:43:16.090224 kernel: printk: bootconsole [earlyser0] disabled
Jan 24 00:43:16.090233 kernel: ACPI: Core revision 20230628
Jan 24 00:43:16.090243 kernel: Failed to register legacy timer interrupt
Jan 24 00:43:16.090251 kernel: APIC: Switch to symmetric I/O mode setup
Jan 24 00:43:16.090264 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 24 00:43:16.090272 kernel: Hyper-V: Using IPI hypercalls
Jan 24 00:43:16.090280 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 24 00:43:16.090291 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 24 00:43:16.090299 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 24 00:43:16.090309 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 24 00:43:16.090318 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 24 00:43:16.090326 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 24 00:43:16.090337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 24 00:43:16.090347 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 24 00:43:16.090356 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 24 00:43:16.090366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 24 00:43:16.090374 kernel: Spectre V2 : Mitigation: Retpolines
Jan 24 00:43:16.090384 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 24 00:43:16.090393 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 24 00:43:16.090401 kernel: RETBleed: Vulnerable
Jan 24 00:43:16.090412 kernel: Speculative Store Bypass: Vulnerable
Jan 24 00:43:16.090419 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:43:16.090427 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 24 00:43:16.090440 kernel: active return thunk: its_return_thunk
Jan 24 00:43:16.090448 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 24 00:43:16.090458 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 24 00:43:16.090466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 24 00:43:16.090474 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 24 00:43:16.090485 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 24 00:43:16.090494 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 24 00:43:16.090504 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 24 00:43:16.090512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 24 00:43:16.090522 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 24 00:43:16.090530 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 24 00:43:16.090541 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 24 00:43:16.090552 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 24 00:43:16.090559 kernel: Freeing SMP alternatives memory: 32K
Jan 24 00:43:16.090569 kernel: pid_max: default: 32768 minimum: 301
Jan 24 00:43:16.090578 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 24 00:43:16.090586 kernel: landlock: Up and running.
Jan 24 00:43:16.090596 kernel: SELinux: Initializing.
Jan 24 00:43:16.090604 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:43:16.090612 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 24 00:43:16.090623 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 24 00:43:16.090632 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:43:16.090644 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:43:16.090655 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 24 00:43:16.090664 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 24 00:43:16.090686 kernel: signal: max sigframe size: 3632
Jan 24 00:43:16.090703 kernel: rcu: Hierarchical SRCU implementation.
Jan 24 00:43:16.090723 kernel: rcu: Max phase no-delay instances is 400.
Jan 24 00:43:16.090741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 24 00:43:16.090757 kernel: smp: Bringing up secondary CPUs ...
Jan 24 00:43:16.090773 kernel: smpboot: x86: Booting SMP configuration:
Jan 24 00:43:16.090799 kernel: .... node #0, CPUs: #1
Jan 24 00:43:16.090814 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 24 00:43:16.090830 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 24 00:43:16.090844 kernel: smp: Brought up 1 node, 2 CPUs
Jan 24 00:43:16.090860 kernel: smpboot: Max logical packages: 1
Jan 24 00:43:16.090876 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 24 00:43:16.090895 kernel: devtmpfs: initialized
Jan 24 00:43:16.090912 kernel: x86/mm: Memory block size: 128MB
Jan 24 00:43:16.090938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 24 00:43:16.090956 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 24 00:43:16.090971 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 24 00:43:16.090987 kernel: pinctrl core: initialized pinctrl subsystem
Jan 24 00:43:16.091004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 24 00:43:16.091025 kernel: audit: initializing netlink subsys (disabled)
Jan 24 00:43:16.091041 kernel: audit: type=2000 audit(1769215395.028:1): state=initialized audit_enabled=0 res=1
Jan 24 00:43:16.091057 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 24 00:43:16.091071 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 24 00:43:16.091094 kernel: cpuidle: using governor menu
Jan 24 00:43:16.091109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 24 00:43:16.091124 kernel: dca service started, version 1.12.1
Jan 24 00:43:16.091140 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 24 00:43:16.091156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 24 00:43:16.091174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 24 00:43:16.091190 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 24 00:43:16.091208 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 24 00:43:16.091223 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 24 00:43:16.091244 kernel: ACPI: Added _OSI(Module Device)
Jan 24 00:43:16.091262 kernel: ACPI: Added _OSI(Processor Device)
Jan 24 00:43:16.091281 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 24 00:43:16.091298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 24 00:43:16.091314 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 24 00:43:16.091330 kernel: ACPI: Interpreter enabled
Jan 24 00:43:16.091346 kernel: ACPI: PM: (supports S0 S5)
Jan 24 00:43:16.091363 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 24 00:43:16.091378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 24 00:43:16.091398 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 24 00:43:16.091411 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 24 00:43:16.091424 kernel: iommu: Default domain type: Translated
Jan 24 00:43:16.091448 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 24 00:43:16.091463 kernel: efivars: Registered efivars operations
Jan 24 00:43:16.091478 kernel: PCI: Using ACPI for IRQ routing
Jan 24 00:43:16.091493 kernel: PCI: System does not support PCI
Jan 24 00:43:16.091506 kernel: vgaarb: loaded
Jan 24 00:43:16.091521 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 24 00:43:16.091540 kernel: VFS: Disk quotas dquot_6.6.0
Jan 24 00:43:16.091553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 24 00:43:16.091568 kernel: pnp: PnP ACPI init
Jan 24 00:43:16.091583 kernel: pnp: PnP ACPI: found 3 devices
Jan 24 00:43:16.091598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 24 00:43:16.091613 kernel: NET: Registered PF_INET protocol family
Jan 24 00:43:16.091628 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 24 00:43:16.091643 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 24 00:43:16.091658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 24 00:43:16.091683 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 24 00:43:16.091699 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 24 00:43:16.091714 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 24 00:43:16.091729 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:43:16.091744 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 24 00:43:16.091759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 24 00:43:16.091774 kernel: NET: Registered PF_XDP protocol family
Jan 24 00:43:16.091789 kernel: PCI: CLS 0 bytes, default 64
Jan 24 00:43:16.091804 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 24 00:43:16.091822 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB)
Jan 24 00:43:16.091837 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 24 00:43:16.091852 kernel: Initialise system trusted keyrings
Jan 24 00:43:16.091866 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 24 00:43:16.091885 kernel: Key type asymmetric registered
Jan 24 00:43:16.091900 kernel: Asymmetric key parser 'x509' registered
Jan 24 00:43:16.091913 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 24 00:43:16.091928 kernel: io scheduler mq-deadline registered
Jan 24 00:43:16.091943 kernel: io scheduler kyber registered
Jan 24 00:43:16.091960 kernel: io scheduler bfq registered
Jan 24 00:43:16.091975 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 24 00:43:16.091990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 24 00:43:16.092005 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 24 00:43:16.092020 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 24 00:43:16.092035 kernel: i8042: PNP: No PS/2 controller found.
Jan 24 00:43:16.092218 kernel: rtc_cmos 00:02: registered as rtc0
Jan 24 00:43:16.092346 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:43:15 UTC (1769215395)
Jan 24 00:43:16.092469 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 24 00:43:16.092488 kernel: intel_pstate: CPU model not supported
Jan 24 00:43:16.092503 kernel: efifb: probing for efifb
Jan 24 00:43:16.092518 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 24 00:43:16.092533 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 24 00:43:16.092548 kernel: efifb: scrolling: redraw
Jan 24 00:43:16.092563 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 24 00:43:16.092579 kernel: Console: switching to colour frame buffer device 128x48
Jan 24 00:43:16.092594 kernel: fb0: EFI VGA frame buffer device
Jan 24 00:43:16.092612 kernel: pstore: Using crash dump compression: deflate
Jan 24 00:43:16.092627 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 24 00:43:16.092641 kernel: NET: Registered PF_INET6 protocol family
Jan 24 00:43:16.092656 kernel: Segment Routing with IPv6
Jan 24 00:43:16.092671 kernel: In-situ OAM (IOAM) with IPv6
Jan 24 00:43:16.092707 kernel: NET: Registered PF_PACKET protocol family
Jan 24 00:43:16.092722 kernel: Key type dns_resolver registered
Jan 24 00:43:16.092736 kernel: IPI shorthand broadcast: enabled
Jan 24 00:43:16.092751 kernel: sched_clock: Marking stable (836099100, 46759600)->(1089834600, -206975900)
Jan 24 00:43:16.092770 kernel: registered taskstats version 1
Jan 24 00:43:16.092784 kernel: Loading compiled-in X.509 certificates
Jan 24 00:43:16.092799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634'
Jan 24 00:43:16.092813 kernel: Key type .fscrypt registered
Jan 24 00:43:16.092828 kernel: Key type fscrypt-provisioning registered
Jan 24 00:43:16.092843 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 24 00:43:16.092858 kernel: ima: Allocated hash algorithm: sha1
Jan 24 00:43:16.092873 kernel: ima: No architecture policies found
Jan 24 00:43:16.092888 kernel: clk: Disabling unused clocks
Jan 24 00:43:16.092906 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 24 00:43:16.092921 kernel: Write protecting the kernel read-only data: 36864k
Jan 24 00:43:16.092936 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 24 00:43:16.092951 kernel: Run /init as init process
Jan 24 00:43:16.092966 kernel: with arguments:
Jan 24 00:43:16.092981 kernel: /init
Jan 24 00:43:16.092995 kernel: with environment:
Jan 24 00:43:16.093010 kernel: HOME=/
Jan 24 00:43:16.093024 kernel: TERM=linux
Jan 24 00:43:16.093044 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 24 00:43:16.093063 systemd[1]: Detected virtualization microsoft.
Jan 24 00:43:16.093079 systemd[1]: Detected architecture x86-64.
Jan 24 00:43:16.093094 systemd[1]: Running in initrd.
Jan 24 00:43:16.093110 systemd[1]: No hostname configured, using default hostname.
Jan 24 00:43:16.093125 systemd[1]: Hostname set to .
Jan 24 00:43:16.093141 systemd[1]: Initializing machine ID from random generator.
Jan 24 00:43:16.093160 systemd[1]: Queued start job for default target initrd.target.
Jan 24 00:43:16.093175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 24 00:43:16.093191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 24 00:43:16.093208 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 24 00:43:16.093224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 24 00:43:16.093239 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 24 00:43:16.093256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 24 00:43:16.093277 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 24 00:43:16.093293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 24 00:43:16.093309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 24 00:43:16.093325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:43:16.093341 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:43:16.093356 systemd[1]: Reached target slices.target - Slice Units.
Jan 24 00:43:16.093372 systemd[1]: Reached target swap.target - Swaps.
Jan 24 00:43:16.093388 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:43:16.093407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 24 00:43:16.093423 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 24 00:43:16.093439 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 24 00:43:16.093454 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 24 00:43:16.093471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 24 00:43:16.093487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 24 00:43:16.093503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 24 00:43:16.093519 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:43:16.093535 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 24 00:43:16.093553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 24 00:43:16.093569 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 24 00:43:16.093585 systemd[1]: Starting systemd-fsck-usr.service...
Jan 24 00:43:16.093601 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 24 00:43:16.093617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 24 00:43:16.093633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:43:16.093672 systemd-journald[177]: Collecting audit messages is disabled.
Jan 24 00:43:16.093716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 24 00:43:16.093732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 24 00:43:16.093748 systemd-journald[177]: Journal started
Jan 24 00:43:16.093781 systemd-journald[177]: Runtime Journal (/run/log/journal/71351a49e5b441f6b3971c7bde900b42) is 8.0M, max 158.8M, 150.8M free.
Jan 24 00:43:16.082038 systemd-modules-load[178]: Inserted module 'overlay'
Jan 24 00:43:16.108943 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 24 00:43:16.106479 systemd[1]: Finished systemd-fsck-usr.service.
Jan 24 00:43:16.117872 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 24 00:43:16.130357 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 24 00:43:16.135902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:43:16.146383 kernel: Bridge firewalling registered
Jan 24 00:43:16.140080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:43:16.143351 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 24 00:43:16.149629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 24 00:43:16.158170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 24 00:43:16.165639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:43:16.174869 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:43:16.185810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 24 00:43:16.197486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 24 00:43:16.204453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:43:16.217825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 24 00:43:16.223414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 24 00:43:16.226889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 24 00:43:16.241746 dracut-cmdline[209]: dracut-dracut-053
Jan 24 00:43:16.235999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:43:16.250190 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2
Jan 24 00:43:16.300008 systemd-resolved[220]: Positive Trust Anchors:
Jan 24 00:43:16.300021 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:43:16.300071 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:43:16.328412 systemd-resolved[220]: Defaulting to hostname 'linux'.
Jan 24 00:43:16.332086 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:43:16.339411 kernel: SCSI subsystem initialized
Jan 24 00:43:16.339566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:43:16.351694 kernel: Loading iSCSI transport class v2.0-870.
Jan 24 00:43:16.362700 kernel: iscsi: registered transport (tcp)
Jan 24 00:43:16.384116 kernel: iscsi: registered transport (qla4xxx)
Jan 24 00:43:16.384167 kernel: QLogic iSCSI HBA Driver
Jan 24 00:43:16.419386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 24 00:43:16.431990 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 24 00:43:16.465116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 24 00:43:16.465176 kernel: device-mapper: uevent: version 1.0.3
Jan 24 00:43:16.468498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 24 00:43:16.507697 kernel: raid6: avx512x4 gen() 18279 MB/s
Jan 24 00:43:16.527695 kernel: raid6: avx512x2 gen() 18380 MB/s
Jan 24 00:43:16.546687 kernel: raid6: avx512x1 gen() 18386 MB/s
Jan 24 00:43:16.565688 kernel: raid6: avx2x4 gen() 18224 MB/s
Jan 24 00:43:16.584694 kernel: raid6: avx2x2 gen() 18270 MB/s
Jan 24 00:43:16.605169 kernel: raid6: avx2x1 gen() 13992 MB/s
Jan 24 00:43:16.605207 kernel: raid6: using algorithm avx512x1 gen() 18386 MB/s
Jan 24 00:43:16.627114 kernel: raid6: .... xor() 26859 MB/s, rmw enabled
Jan 24 00:43:16.627162 kernel: raid6: using avx512x2 recovery algorithm
Jan 24 00:43:16.649700 kernel: xor: automatically using best checksumming function avx
Jan 24 00:43:16.796705 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 24 00:43:16.805736 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 24 00:43:16.816829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 24 00:43:16.829173 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 24 00:43:16.833620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 24 00:43:16.853178 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 24 00:43:16.867714 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 24 00:43:16.894186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 24 00:43:16.910825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 24 00:43:16.951018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 24 00:43:16.962906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 24 00:43:16.992339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 24 00:43:17.003440 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 24 00:43:17.006841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 24 00:43:17.010248 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 24 00:43:17.025593 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 24 00:43:17.045518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 24 00:43:17.059706 kernel: hv_vmbus: Vmbus version:5.2
Jan 24 00:43:17.067838 kernel: cryptd: max_cpu_qlen set to 1000
Jan 24 00:43:17.081716 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 24 00:43:17.089695 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 24 00:43:17.094872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 24 00:43:17.097767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:43:17.101296 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:43:17.107475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:43:17.107540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:43:17.117138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:43:17.141627 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 24 00:43:17.141654 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 24 00:43:17.141687 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 24 00:43:17.141798 kernel: hv_vmbus: registering driver hid_hyperv
Jan 24 00:43:17.138952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:43:17.150736 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 24 00:43:17.150760 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 24 00:43:17.167704 kernel: hv_vmbus: registering driver hv_netvsc
Jan 24 00:43:17.167899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 24 00:43:17.168079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:43:17.180516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 24 00:43:17.202036 kernel: PTP clock support registered
Jan 24 00:43:17.202094 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 24 00:43:17.207347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:43:17.218811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 24 00:43:17.230807 kernel: AES CTR mode by8 optimization enabled
Jan 24 00:43:17.230845 kernel: hv_vmbus: registering driver hv_storvsc
Jan 24 00:43:17.230857 kernel: hv_utils: Registering HyperV Utility Driver
Jan 24 00:43:17.234637 kernel: hv_vmbus: registering driver hv_utils
Jan 24 00:43:17.239858 kernel: hv_utils: Heartbeat IC version 3.0
Jan 24 00:43:17.239897 kernel: hv_utils: Shutdown IC version 3.2
Jan 24 00:43:17.241524 kernel: hv_utils: TimeSync IC version 4.0
Jan 24 00:43:17.244485 kernel: scsi host0: storvsc_host_t
Jan 24 00:43:17.665857 systemd-resolved[220]: Clock change detected. Flushing caches.
Jan 24 00:43:17.671343 kernel: scsi host1: storvsc_host_t
Jan 24 00:43:17.676356 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 24 00:43:17.684423 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 24 00:43:17.686576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 24 00:43:17.712653 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 24 00:43:17.712896 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 24 00:43:17.714393 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 24 00:43:17.725849 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 24 00:43:17.726095 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 24 00:43:17.726254 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 24 00:43:17.728458 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 24 00:43:17.732276 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 24 00:43:17.740390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:43:17.743349 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 24 00:43:17.783733 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: VF slot 1 added
Jan 24 00:43:17.792344 kernel: hv_vmbus: registering driver hv_pci
Jan 24 00:43:17.792371 kernel: hv_pci 8790bb89-937b-41cf-a27e-ce0346748335: PCI VMBus probing: Using version 0x10004
Jan 24 00:43:17.802283 kernel: hv_pci 8790bb89-937b-41cf-a27e-ce0346748335: PCI host bridge to bus 937b:00
Jan 24 00:43:17.802541 kernel: pci_bus 937b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 24 00:43:17.805461 kernel: pci_bus 937b:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 24 00:43:17.810452 kernel: pci 937b:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 24 00:43:17.815345 kernel: pci 937b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:43:17.819425 kernel: pci 937b:00:02.0: enabling Extended Tags
Jan 24 00:43:17.829676 kernel: pci 937b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 937b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 24 00:43:17.836781 kernel: pci_bus 937b:00: busn_res: [bus 00-ff] end is updated to 00
Jan 24 00:43:17.837014 kernel: pci 937b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 24 00:43:18.002966 kernel: mlx5_core 937b:00:02.0: enabling device (0000 -> 0002)
Jan 24 00:43:18.007351 kernel: mlx5_core 937b:00:02.0: firmware version: 14.30.5026
Jan 24 00:43:18.216369 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: VF registering: eth1
Jan 24 00:43:18.216596 kernel: mlx5_core 937b:00:02.0 eth1: joined to eth0
Jan 24 00:43:18.222826 kernel: mlx5_core 937b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 24 00:43:18.230355 kernel: mlx5_core 937b:00:02.0 enP37755s1: renamed from eth1
Jan 24 00:43:18.313346 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (439)
Jan 24 00:43:18.328442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 24 00:43:18.364934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 24 00:43:18.379185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 24 00:43:18.394505 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (452)
Jan 24 00:43:18.419081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 24 00:43:18.422693 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 24 00:43:18.439469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 24 00:43:18.454343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:43:18.464345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:43:18.471346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:43:19.473347 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 24 00:43:19.473708 disk-uuid[602]: The operation has completed successfully.
Jan 24 00:43:19.552021 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 24 00:43:19.552133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 24 00:43:19.581516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 24 00:43:19.590713 sh[715]: Success
Jan 24 00:43:19.623429 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 24 00:43:19.923206 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 24 00:43:19.940436 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 24 00:43:19.945777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 24 00:43:19.965690 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80
Jan 24 00:43:19.965751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:43:19.969395 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 24 00:43:19.972206 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 24 00:43:19.974652 kernel: BTRFS info (device dm-0): using free space tree
Jan 24 00:43:20.358654 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 24 00:43:20.364387 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 24 00:43:20.374493 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 24 00:43:20.380618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 24 00:43:20.398768 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:43:20.398823 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:43:20.400587 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:43:20.441359 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:43:20.451962 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 24 00:43:20.458181 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:43:20.469289 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 24 00:43:20.481522 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 24 00:43:20.496438 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 24 00:43:20.505520 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 24 00:43:20.527226 systemd-networkd[899]: lo: Link UP
Jan 24 00:43:20.527236 systemd-networkd[899]: lo: Gained carrier
Jan 24 00:43:20.529280 systemd-networkd[899]: Enumeration completed
Jan 24 00:43:20.529538 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 24 00:43:20.531990 systemd[1]: Reached target network.target - Network.
Jan 24 00:43:20.533292 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:43:20.533296 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:43:20.601353 kernel: mlx5_core 937b:00:02.0 enP37755s1: Link up
Jan 24 00:43:20.634420 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: Data path switched to VF: enP37755s1
Jan 24 00:43:20.634694 systemd-networkd[899]: enP37755s1: Link UP
Jan 24 00:43:20.634829 systemd-networkd[899]: eth0: Link UP
Jan 24 00:43:20.634997 systemd-networkd[899]: eth0: Gained carrier
Jan 24 00:43:20.635010 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:43:20.639507 systemd-networkd[899]: enP37755s1: Gained carrier
Jan 24 00:43:20.671379 systemd-networkd[899]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:43:21.770920 ignition[876]: Ignition 2.19.0
Jan 24 00:43:21.770932 ignition[876]: Stage: fetch-offline
Jan 24 00:43:21.770973 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:43:21.776492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 24 00:43:21.770985 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:43:21.771092 ignition[876]: parsed url from cmdline: ""
Jan 24 00:43:21.771097 ignition[876]: no config URL provided
Jan 24 00:43:21.771104 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:43:21.771113 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:43:21.791537 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 24 00:43:21.771121 ignition[876]: failed to fetch config: resource requires networking
Jan 24 00:43:21.774385 ignition[876]: Ignition finished successfully
Jan 24 00:43:21.807682 ignition[908]: Ignition 2.19.0
Jan 24 00:43:21.807693 ignition[908]: Stage: fetch
Jan 24 00:43:21.807896 ignition[908]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:43:21.807909 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:43:21.808007 ignition[908]: parsed url from cmdline: ""
Jan 24 00:43:21.808010 ignition[908]: no config URL provided
Jan 24 00:43:21.808015 ignition[908]: reading system config file "/usr/lib/ignition/user.ign"
Jan 24 00:43:21.809594 ignition[908]: no config at "/usr/lib/ignition/user.ign"
Jan 24 00:43:21.809619 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 24 00:43:21.967516 ignition[908]: GET result: OK
Jan 24 00:43:21.967637 ignition[908]: config has been read from IMDS userdata
Jan 24 00:43:21.967689 ignition[908]: parsing config with SHA512: 9924dfe27864a6247a3267e2436ae6b9337c883dd79003bd88540360078cf1d5e3b35cafccffd402079531c781daf3ec95dccae54cce243467bc4cc6d8de8399
Jan 24 00:43:21.976993 unknown[908]: fetched base config from "system"
Jan 24 00:43:21.979230 unknown[908]: fetched base config from "system"
Jan 24 00:43:21.979238 unknown[908]: fetched user config from "azure"
Jan 24 00:43:21.979670 ignition[908]: fetch: fetch complete
Jan 24 00:43:21.979676 ignition[908]: fetch: fetch passed
Jan 24 00:43:21.979720 ignition[908]: Ignition finished successfully
Jan 24 00:43:21.990742 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 24 00:43:22.000592 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 24 00:43:22.015979 ignition[914]: Ignition 2.19.0
Jan 24 00:43:22.015989 ignition[914]: Stage: kargs
Jan 24 00:43:22.018673 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 24 00:43:22.016228 ignition[914]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:43:22.016239 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:43:22.017135 ignition[914]: kargs: kargs passed
Jan 24 00:43:22.017174 ignition[914]: Ignition finished successfully
Jan 24 00:43:22.039456 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 24 00:43:22.055815 ignition[920]: Ignition 2.19.0
Jan 24 00:43:22.055827 ignition[920]: Stage: disks
Jan 24 00:43:22.056116 ignition[920]: no configs at "/usr/lib/ignition/base.d"
Jan 24 00:43:22.056127 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:43:22.062256 ignition[920]: disks: disks passed
Jan 24 00:43:22.063554 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 24 00:43:22.062302 ignition[920]: Ignition finished successfully
Jan 24 00:43:22.068250 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 24 00:43:22.072447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 24 00:43:22.075868 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 24 00:43:22.080918 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:43:22.083611 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:43:22.102524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 24 00:43:22.177114 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 24 00:43:22.182377 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 24 00:43:22.197783 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 24 00:43:22.246612 systemd-networkd[899]: eth0: Gained IPv6LL
Jan 24 00:43:22.290548 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none.
Jan 24 00:43:22.291127 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 24 00:43:22.296311 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 24 00:43:22.347426 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:43:22.363623 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939)
Jan 24 00:43:22.363703 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:43:22.366565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:43:22.369179 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:43:22.379618 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:43:22.377428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 24 00:43:22.383061 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 24 00:43:22.386136 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 24 00:43:22.386169 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 24 00:43:22.402318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:43:22.407192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 24 00:43:22.416476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 24 00:43:23.249179 coreos-metadata[956]: Jan 24 00:43:23.249 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 24 00:43:23.255261 coreos-metadata[956]: Jan 24 00:43:23.255 INFO Fetch successful
Jan 24 00:43:23.258212 coreos-metadata[956]: Jan 24 00:43:23.255 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 24 00:43:23.263622 coreos-metadata[956]: Jan 24 00:43:23.263 INFO Fetch successful
Jan 24 00:43:23.263622 coreos-metadata[956]: Jan 24 00:43:23.263 INFO wrote hostname ci-4081.3.6-n-e69c55f9b7 to /sysroot/etc/hostname
Jan 24 00:43:23.265113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 24 00:43:23.279269 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory
Jan 24 00:43:23.335291 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory
Jan 24 00:43:23.341579 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory
Jan 24 00:43:23.346774 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 24 00:43:24.306482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 24 00:43:24.316470 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 24 00:43:24.320215 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 24 00:43:24.334960 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 24 00:43:24.341406 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:43:24.362308 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 24 00:43:24.373224 ignition[1058]: INFO : Ignition 2.19.0
Jan 24 00:43:24.373224 ignition[1058]: INFO : Stage: mount
Jan 24 00:43:24.381021 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 24 00:43:24.381021 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 24 00:43:24.381021 ignition[1058]: INFO : mount: mount passed
Jan 24 00:43:24.381021 ignition[1058]: INFO : Ignition finished successfully
Jan 24 00:43:24.375168 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 24 00:43:24.385424 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 24 00:43:24.399510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 24 00:43:24.418337 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070)
Jan 24 00:43:24.422346 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8
Jan 24 00:43:24.422379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 24 00:43:24.427352 kernel: BTRFS info (device sda6): using free space tree
Jan 24 00:43:24.434340 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 24 00:43:24.436140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 24 00:43:24.464786 ignition[1087]: INFO : Ignition 2.19.0 Jan 24 00:43:24.464786 ignition[1087]: INFO : Stage: files Jan 24 00:43:24.469277 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:24.469277 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:24.469277 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:43:24.469277 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:43:24.469277 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:43:24.574220 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:43:24.578217 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:43:24.582390 unknown[1087]: wrote ssh authorized keys file for user: core Jan 24 00:43:24.585206 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:43:24.599789 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:43:24.604670 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:43:24.648225 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:43:24.713028 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:43:24.718224 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:43:24.722837 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:43:24.727778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:43:24.732478 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:43:24.737118 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:43:24.741785 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:43:24.746481 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:43:25.157719 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:43:25.324864 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:25.324864 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:43:25.360273 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:43:25.366791 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:43:25.366791 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:43:25.375701 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:43:25.375701 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:43:25.383305 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:43:25.387813 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:43:25.387813 ignition[1087]: INFO : files: files passed Jan 24 00:43:25.387813 ignition[1087]: INFO : Ignition finished successfully Jan 24 00:43:25.386019 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:43:25.406511 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:43:25.413051 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:43:25.416251 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:43:25.417402 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:43:25.433172 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.433172 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.438113 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.436612 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:43:25.439894 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:43:25.462555 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:43:25.490456 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:43:25.490572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
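The Ignition files stage that just completed above is driven by a config fetched earlier in boot. For orientation, a minimal sketch of a spec-3 style config that would produce operations of the kind logged (a file write plus an enabled unit preset); every path and value here is an illustrative placeholder, not the config this machine actually received:

import json

# Illustrative Ignition (spec 3.x) config producing operations like those
# logged above: create a file under /home/core and enable prepare-helm.service.
# NOT the config this VM received; contents are placeholders.
config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "files": [
            {
                "path": "/home/core/install.sh",
                "mode": 0o755,  # serialized as decimal, as Ignition expects
                "contents": {"source": "data:,echo%20hello%0A"},
            }
        ]
    },
    "systemd": {
        "units": [
            {
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\nDescription=prepare helm (placeholder)\n",
            }
        ]
    },
}

print(json.dumps(config, indent=2))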
Jan 24 00:43:25.500055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:43:25.502793 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:43:25.505756 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:43:25.520517 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:43:25.535625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:43:25.547470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:43:25.563070 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:43:25.563263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:43:25.564473 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:43:25.564869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:43:25.565003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:43:25.565845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:43:25.566412 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:43:25.566836 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:43:25.567276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:43:25.567718 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:43:25.568203 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:43:25.568665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:43:25.569120 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:43:25.569537 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:43:25.570033 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:43:25.570899 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:43:25.571027 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:43:25.571816 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:43:25.572300 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:43:25.572708 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:43:25.611108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:43:25.667645 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:43:25.667868 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:43:25.673912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:43:25.674058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:43:25.685585 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:43:25.685722 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:43:25.693528 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:43:25.693695 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:43:25.705538 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 24 00:43:25.711015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:43:25.711155 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:43:25.721841 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:43:25.727128 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:43:25.727313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:43:25.731078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:43:25.740549 ignition[1139]: INFO : Ignition 2.19.0 Jan 24 00:43:25.740549 ignition[1139]: INFO : Stage: umount Jan 24 00:43:25.740549 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:25.740549 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:25.740549 ignition[1139]: INFO : umount: umount passed Jan 24 00:43:25.740549 ignition[1139]: INFO : Ignition finished successfully Jan 24 00:43:25.731697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:43:25.756721 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:43:25.757681 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:43:25.766585 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:43:25.766678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:43:25.781064 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:43:25.781121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:43:25.783769 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:43:25.783840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:43:25.788942 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:43:25.788990 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:43:25.791601 systemd[1]: Stopped target network.target - Network. Jan 24 00:43:25.796386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:43:25.796443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:43:25.816511 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:43:25.821382 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:43:25.826456 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:43:25.830114 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:43:25.838848 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:43:25.841488 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:43:25.841523 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:43:25.844126 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:43:25.844169 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:43:25.844273 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:43:25.844316 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:43:25.844744 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:43:25.844778 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:43:25.845298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 24 00:43:25.846116 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:43:25.847556 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:43:25.870392 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 24 00:43:25.872737 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:43:25.872843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:43:25.879553 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:43:25.879651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:43:25.886554 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:43:25.886626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:43:25.909466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:43:25.912793 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:43:25.912853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:43:25.921679 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:43:25.921730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:43:25.924572 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:43:25.924622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:43:25.946444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:43:25.946504 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:43:25.955673 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:43:25.980740 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:43:25.980899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:43:25.988048 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:43:25.988113 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:43:25.994079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:43:25.994119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:43:26.010857 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: Data path switched from VF: enP37755s1 Jan 24 00:43:25.997017 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:43:26.010849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:43:26.013659 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:43:26.013705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:43:26.014259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:43:26.014299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:43:26.025537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:43:26.033321 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:43:26.033430 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:43:26.036790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:43:26.036840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 00:43:26.045727 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:43:26.045822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:43:26.052131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:43:26.052209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:43:26.705455 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:43:26.705603 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:43:26.708685 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:43:26.718149 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:43:26.718218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:43:26.728567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:43:26.766493 systemd[1]: Switching root. Jan 24 00:43:26.856929 systemd-journald[177]: Journal stopped
Jan 24 00:43:16.089453 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 24 00:43:16.089461 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 24 00:43:16.089471 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 24 00:43:16.089478 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 24 00:43:16.089487 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 24 00:43:16.089496 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 24 00:43:16.089503 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 24 00:43:16.089512 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 24 00:43:16.089521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 24 00:43:16.089531 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 24 00:43:16.089541 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 24 00:43:16.089549 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 24 00:43:16.089556 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 24 00:43:16.089567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 24 00:43:16.089574 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 24 00:43:16.089581 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 24 00:43:16.089592 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 24 00:43:16.089599 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 24 00:43:16.089611 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 24 00:43:16.089620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 24 00:43:16.089627 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 24 00:43:16.089636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 24 00:43:16.089645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 24 00:43:16.089652 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 24 00:43:16.089662 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 24 00:43:16.089670 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 24 00:43:16.089688 kernel: Zone ranges: Jan 24 00:43:16.089699 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 24 00:43:16.089710 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 24 00:43:16.089717 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:43:16.089726 kernel: Movable zone start for each node Jan 24 00:43:16.089735 kernel: Early memory node ranges Jan 24 00:43:16.089742 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 24 00:43:16.089752 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 24 00:43:16.089760 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 24 00:43:16.089768 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 24 00:43:16.089780 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 24 00:43:16.089788 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 24 00:43:16.089795 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 24 00:43:16.089806 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 24 00:43:16.089813 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 24 00:43:16.089821 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 24 00:43:16.089831 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 24 00:43:16.089838 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 24 00:43:16.089847 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 24 00:43:16.089858 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 24 00:43:16.089866 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 24 00:43:16.089876 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 24 00:43:16.089884 kernel: Booting paravirtualized kernel on Hyper-V Jan 24 00:43:16.089892 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 24 00:43:16.089902 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 24 00:43:16.089909 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 24 00:43:16.089917 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 24 00:43:16.089927 kernel: pcpu-alloc: [0] 0 1 Jan 24 00:43:16.089936 kernel: Hyper-V: PV spinlocks enabled Jan 24 00:43:16.089945 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 24 00:43:16.089955 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:43:16.089963 kernel: random: crng init done Jan 24 00:43:16.089972 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 24 00:43:16.089981 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 24 00:43:16.089988 kernel: Fallback order for Node 0: 0 Jan 24 00:43:16.089998 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 24 00:43:16.090009 kernel: Policy zone: Normal Jan 24 00:43:16.090026 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 24 00:43:16.090034 kernel: software IO TLB: area num 2. Jan 24 00:43:16.090045 kernel: Memory: 8077080K/8387460K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 310120K reserved, 0K cma-reserved) Jan 24 00:43:16.090056 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 24 00:43:16.090063 kernel: ftrace: allocating 37989 entries in 149 pages Jan 24 00:43:16.090072 kernel: ftrace: allocated 149 pages with 4 groups Jan 24 00:43:16.090082 kernel: Dynamic Preempt: voluntary Jan 24 00:43:16.090093 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 24 00:43:16.090102 kernel: rcu: RCU event tracing is enabled. Jan 24 00:43:16.090114 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 24 00:43:16.090124 kernel: Trampoline variant of Tasks RCU enabled. Jan 24 00:43:16.090131 kernel: Rude variant of Tasks RCU enabled. Jan 24 00:43:16.090142 kernel: Tracing variant of Tasks RCU enabled. Jan 24 00:43:16.090150 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
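The replayed command line above carries duplicated key=value flags because the initrd prepends its own defaults (rootflags=rw mount.usrflags=ro) to the bootloader's line. A small sketch of splitting such a line into parameters; the last-occurrence-wins rule is an assumption for illustration, since in the kernel each consumer parses the line for itself:

# Minimal sketch: split a kernel command line like the one logged above
# into parameters. Bare words become True; repeated keys keep the last
# occurrence (an illustrative convention, not the kernel's own behavior).
def parse_cmdline(cmdline: str) -> dict:
    params = {}
    for token in cmdline.split():
        key, sep, value = token.partition("=")
        params[key] = value if sep else True
    return params

line = ("rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a "
        "root=LABEL=ROOT flatcar.oem.id=azure consoleblank=0")
print(parse_cmdline(line)["flatcar.oem.id"])  # -> azure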
Jan 24 00:43:16.090158 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 24 00:43:16.090171 kernel: Using NULL legacy PIC Jan 24 00:43:16.090179 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 24 00:43:16.090190 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 24 00:43:16.090198 kernel: Console: colour dummy device 80x25 Jan 24 00:43:16.090206 kernel: printk: console [tty1] enabled Jan 24 00:43:16.090217 kernel: printk: console [ttyS0] enabled Jan 24 00:43:16.090224 kernel: printk: bootconsole [earlyser0] disabled Jan 24 00:43:16.090233 kernel: ACPI: Core revision 20230628 Jan 24 00:43:16.090243 kernel: Failed to register legacy timer interrupt Jan 24 00:43:16.090251 kernel: APIC: Switch to symmetric I/O mode setup Jan 24 00:43:16.090264 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 24 00:43:16.090272 kernel: Hyper-V: Using IPI hypercalls Jan 24 00:43:16.090280 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 24 00:43:16.090291 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 24 00:43:16.090299 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 24 00:43:16.090309 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 24 00:43:16.090318 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 24 00:43:16.090326 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 24 00:43:16.090337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 24 00:43:16.090347 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 24 00:43:16.090356 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 24 00:43:16.090366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 24 00:43:16.090374 kernel: Spectre V2 : Mitigation: Retpolines Jan 24 00:43:16.090384 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 24 00:43:16.090393 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 24 00:43:16.090401 kernel: RETBleed: Vulnerable Jan 24 00:43:16.090412 kernel: Speculative Store Bypass: Vulnerable Jan 24 00:43:16.090419 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:43:16.090427 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 24 00:43:16.090440 kernel: active return thunk: its_return_thunk Jan 24 00:43:16.090448 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 24 00:43:16.090458 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 24 00:43:16.090466 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 24 00:43:16.090474 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 24 00:43:16.090485 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 24 00:43:16.090494 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 24 00:43:16.090504 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 24 00:43:16.090512 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 24 00:43:16.090522 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 24 00:43:16.090530 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 24 00:43:16.090541 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 24 00:43:16.090552 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 24 00:43:16.090559 kernel: Freeing SMP alternatives memory: 32K Jan 24 00:43:16.090569 kernel: pid_max: default: 32768 minimum: 301 Jan 24 00:43:16.090578 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 24 00:43:16.090586 kernel: landlock: Up and running. Jan 24 00:43:16.090596 kernel: SELinux: Initializing. Jan 24 00:43:16.090604 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:43:16.090612 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 24 00:43:16.090623 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 24 00:43:16.090632 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:43:16.090644 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:43:16.090655 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 24 00:43:16.090664 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 24 00:43:16.090686 kernel: signal: max sigframe size: 3632 Jan 24 00:43:16.090703 kernel: rcu: Hierarchical SRCU implementation. Jan 24 00:43:16.090723 kernel: rcu: Max phase no-delay instances is 400. Jan 24 00:43:16.090741 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 24 00:43:16.090757 kernel: smp: Bringing up secondary CPUs ... Jan 24 00:43:16.090773 kernel: smpboot: x86: Booting SMP configuration: Jan 24 00:43:16.090799 kernel: .... node #0, CPUs: #1 Jan 24 00:43:16.090814 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 24 00:43:16.090830 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 24 00:43:16.090844 kernel: smp: Brought up 1 node, 2 CPUs Jan 24 00:43:16.090860 kernel: smpboot: Max logical packages: 1 Jan 24 00:43:16.090876 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 24 00:43:16.090895 kernel: devtmpfs: initialized Jan 24 00:43:16.090912 kernel: x86/mm: Memory block size: 128MB Jan 24 00:43:16.090938 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 24 00:43:16.090956 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 24 00:43:16.090971 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 24 00:43:16.090987 kernel: pinctrl core: initialized pinctrl subsystem Jan 24 00:43:16.091004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 24 00:43:16.091025 kernel: audit: initializing netlink subsys (disabled) Jan 24 00:43:16.091041 kernel: audit: type=2000 audit(1769215395.028:1): state=initialized audit_enabled=0 res=1 Jan 24 00:43:16.091057 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 24 00:43:16.091071 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 24 00:43:16.091094 kernel: cpuidle: using governor menu Jan 24 00:43:16.091109 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 24 00:43:16.091124 kernel: dca service started, version 1.12.1 Jan 24 00:43:16.091140 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 24 00:43:16.091156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 24 00:43:16.091174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 24 00:43:16.091190 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 24 00:43:16.091208 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 24 00:43:16.091223 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 24 00:43:16.091244 kernel: ACPI: Added _OSI(Module Device) Jan 24 00:43:16.091262 kernel: ACPI: Added _OSI(Processor Device) Jan 24 00:43:16.091281 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 24 00:43:16.091298 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 24 00:43:16.091314 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 24 00:43:16.091330 kernel: ACPI: Interpreter enabled Jan 24 00:43:16.091346 kernel: ACPI: PM: (supports S0 S5) Jan 24 00:43:16.091363 kernel: ACPI: Using IOAPIC for interrupt routing Jan 24 00:43:16.091378 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 24 00:43:16.091398 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 24 00:43:16.091411 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 24 00:43:16.091424 kernel: iommu: Default domain type: Translated Jan 24 00:43:16.091448 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 24 00:43:16.091463 kernel: efivars: Registered efivars operations Jan 24 00:43:16.091478 kernel: PCI: Using ACPI for IRQ routing Jan 24 00:43:16.091493 kernel: PCI: System does not support PCI Jan 24 00:43:16.091506 kernel: vgaarb: loaded Jan 24 00:43:16.091521 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 24 00:43:16.091540 kernel: VFS: Disk quotas dquot_6.6.0 Jan 24 00:43:16.091553 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 24 00:43:16.091568 kernel: pnp: PnP ACPI init Jan 24 00:43:16.091583 kernel: pnp: PnP ACPI: found 3 
devices Jan 24 00:43:16.091598 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 24 00:43:16.091613 kernel: NET: Registered PF_INET protocol family Jan 24 00:43:16.091628 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 24 00:43:16.091643 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 24 00:43:16.091658 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 24 00:43:16.091683 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 24 00:43:16.091699 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 24 00:43:16.091714 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 24 00:43:16.091729 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:43:16.091744 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 24 00:43:16.091759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 24 00:43:16.091774 kernel: NET: Registered PF_XDP protocol family Jan 24 00:43:16.091789 kernel: PCI: CLS 0 bytes, default 64 Jan 24 00:43:16.091804 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 24 00:43:16.091822 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Jan 24 00:43:16.091837 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 24 00:43:16.091852 kernel: Initialise system trusted keyrings Jan 24 00:43:16.091866 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 24 00:43:16.091885 kernel: Key type asymmetric registered Jan 24 00:43:16.091900 kernel: Asymmetric key parser 'x509' registered Jan 24 00:43:16.091913 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 24 00:43:16.091928 kernel: io scheduler mq-deadline registered Jan 24 00:43:16.091943 kernel: io scheduler kyber registered Jan 24 00:43:16.091960 kernel: io scheduler bfq registered Jan 24 00:43:16.091975 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 24 00:43:16.091990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 24 00:43:16.092005 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 24 00:43:16.092020 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 24 00:43:16.092035 kernel: i8042: PNP: No PS/2 controller found. 
Jan 24 00:43:16.092218 kernel: rtc_cmos 00:02: registered as rtc0 Jan 24 00:43:16.092346 kernel: rtc_cmos 00:02: setting system clock to 2026-01-24T00:43:15 UTC (1769215395) Jan 24 00:43:16.092469 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 24 00:43:16.092488 kernel: intel_pstate: CPU model not supported Jan 24 00:43:16.092503 kernel: efifb: probing for efifb Jan 24 00:43:16.092518 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 24 00:43:16.092533 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 24 00:43:16.092548 kernel: efifb: scrolling: redraw Jan 24 00:43:16.092563 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 24 00:43:16.092579 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:43:16.092594 kernel: fb0: EFI VGA frame buffer device Jan 24 00:43:16.092612 kernel: pstore: Using crash dump compression: deflate Jan 24 00:43:16.092627 kernel: pstore: Registered efi_pstore as persistent store backend Jan 24 00:43:16.092641 kernel: NET: Registered PF_INET6 protocol family Jan 24 00:43:16.092656 kernel: Segment Routing with IPv6 Jan 24 00:43:16.092671 kernel: In-situ OAM (IOAM) with IPv6 Jan 24 00:43:16.092707 kernel: NET: Registered PF_PACKET protocol family Jan 24 00:43:16.092722 kernel: Key type dns_resolver registered Jan 24 00:43:16.092736 kernel: IPI shorthand broadcast: enabled Jan 24 00:43:16.092751 kernel: sched_clock: Marking stable (836099100, 46759600)->(1089834600, -206975900) Jan 24 00:43:16.092770 kernel: registered taskstats version 1 Jan 24 00:43:16.092784 kernel: Loading compiled-in X.509 certificates Jan 24 00:43:16.092799 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 6e114855f6cf7a40074d93a4383c22d00e384634' Jan 24 00:43:16.092813 kernel: Key type .fscrypt registered Jan 24 00:43:16.092828 kernel: Key type fscrypt-provisioning registered Jan 24 00:43:16.092843 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 24 00:43:16.092858 kernel: ima: Allocated hash algorithm: sha1 Jan 24 00:43:16.092873 kernel: ima: No architecture policies found Jan 24 00:43:16.092888 kernel: clk: Disabling unused clocks Jan 24 00:43:16.092906 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 24 00:43:16.092921 kernel: Write protecting the kernel read-only data: 36864k Jan 24 00:43:16.092936 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 24 00:43:16.092951 kernel: Run /init as init process Jan 24 00:43:16.092966 kernel: with arguments: Jan 24 00:43:16.092981 kernel: /init Jan 24 00:43:16.092995 kernel: with environment: Jan 24 00:43:16.093010 kernel: HOME=/ Jan 24 00:43:16.093024 kernel: TERM=linux Jan 24 00:43:16.093044 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:43:16.093063 systemd[1]: Detected virtualization microsoft. Jan 24 00:43:16.093079 systemd[1]: Detected architecture x86-64. Jan 24 00:43:16.093094 systemd[1]: Running in initrd. Jan 24 00:43:16.093110 systemd[1]: No hostname configured, using default hostname. Jan 24 00:43:16.093125 systemd[1]: Hostname set to <localhost>. Jan 24 00:43:16.093141 systemd[1]: Initializing machine ID from random generator. 
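The rtc_cmos entry above pairs a human-readable UTC time with its Unix timestamp; a one-line cross-check confirms the two agree:

from datetime import datetime, timezone

# Cross-check the rtc_cmos entry above: 1769215395 should be
# 2026-01-24T00:43:15 UTC.
ts = datetime.fromtimestamp(1769215395, tz=timezone.utc)
assert ts.isoformat() == "2026-01-24T00:43:15+00:00"
print(ts)  # 2026-01-24 00:43:15+00:00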
Jan 24 00:43:16.093160 systemd[1]: Queued start job for default target initrd.target. Jan 24 00:43:16.093175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:43:16.093191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:43:16.093208 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 24 00:43:16.093224 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:43:16.093239 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 24 00:43:16.093256 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 24 00:43:16.093277 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 24 00:43:16.093293 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 24 00:43:16.093309 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:43:16.093325 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:43:16.093341 systemd[1]: Reached target paths.target - Path Units. Jan 24 00:43:16.093356 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:43:16.093372 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:43:16.093388 systemd[1]: Reached target timers.target - Timer Units. Jan 24 00:43:16.093407 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:43:16.093423 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:43:16.093439 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 24 00:43:16.093454 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 24 00:43:16.093471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:43:16.093487 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:43:16.093503 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:43:16.093519 systemd[1]: Reached target sockets.target - Socket Units. Jan 24 00:43:16.093535 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 24 00:43:16.093553 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:43:16.093569 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 24 00:43:16.093585 systemd[1]: Starting systemd-fsck-usr.service... Jan 24 00:43:16.093601 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:43:16.093617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:43:16.093633 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:16.093672 systemd-journald[177]: Collecting audit messages is disabled. Jan 24 00:43:16.093716 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 24 00:43:16.093732 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:43:16.093748 systemd-journald[177]: Journal started Jan 24 00:43:16.093781 systemd-journald[177]: Runtime Journal (/run/log/journal/71351a49e5b441f6b3971c7bde900b42) is 8.0M, max 158.8M, 150.8M free. 
Jan 24 00:43:16.082038 systemd-modules-load[178]: Inserted module 'overlay' Jan 24 00:43:16.108943 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:43:16.106479 systemd[1]: Finished systemd-fsck-usr.service. Jan 24 00:43:16.117872 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 24 00:43:16.130357 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 24 00:43:16.135902 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 24 00:43:16.146383 kernel: Bridge firewalling registered Jan 24 00:43:16.140080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:43:16.143351 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 24 00:43:16.149629 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:43:16.158170 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 24 00:43:16.165639 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:43:16.174869 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:43:16.185810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:43:16.197486 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:43:16.204453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:43:16.217825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 24 00:43:16.223414 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:43:16.226889 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:43:16.241746 dracut-cmdline[209]: dracut-dracut-053 Jan 24 00:43:16.235999 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 24 00:43:16.250190 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=f12f29e7047038507ed004e06b4d83ecb1c13bb716c56fe3c96b88d906e8eff2 Jan 24 00:43:16.300008 systemd-resolved[220]: Positive Trust Anchors: Jan 24 00:43:16.300021 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 24 00:43:16.300071 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 24 00:43:16.328412 systemd-resolved[220]: Defaulting to hostname 'linux'. 
Jan 24 00:43:16.332086 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 24 00:43:16.339411 kernel: SCSI subsystem initialized Jan 24 00:43:16.339566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:43:16.351694 kernel: Loading iSCSI transport class v2.0-870. Jan 24 00:43:16.362700 kernel: iscsi: registered transport (tcp) Jan 24 00:43:16.384116 kernel: iscsi: registered transport (qla4xxx) Jan 24 00:43:16.384167 kernel: QLogic iSCSI HBA Driver Jan 24 00:43:16.419386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 24 00:43:16.431990 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 24 00:43:16.465116 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 24 00:43:16.465176 kernel: device-mapper: uevent: version 1.0.3 Jan 24 00:43:16.468498 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 24 00:43:16.507697 kernel: raid6: avx512x4 gen() 18279 MB/s Jan 24 00:43:16.527695 kernel: raid6: avx512x2 gen() 18380 MB/s Jan 24 00:43:16.546687 kernel: raid6: avx512x1 gen() 18386 MB/s Jan 24 00:43:16.565688 kernel: raid6: avx2x4 gen() 18224 MB/s Jan 24 00:43:16.584694 kernel: raid6: avx2x2 gen() 18270 MB/s Jan 24 00:43:16.605169 kernel: raid6: avx2x1 gen() 13992 MB/s Jan 24 00:43:16.605207 kernel: raid6: using algorithm avx512x1 gen() 18386 MB/s Jan 24 00:43:16.627114 kernel: raid6: .... xor() 26859 MB/s, rmw enabled Jan 24 00:43:16.627162 kernel: raid6: using avx512x2 recovery algorithm Jan 24 00:43:16.649700 kernel: xor: automatically using best checksumming function avx Jan 24 00:43:16.796705 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 24 00:43:16.805736 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:43:16.816829 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:43:16.829173 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 24 00:43:16.833620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:43:16.853178 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 24 00:43:16.867714 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 24 00:43:16.894186 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:43:16.910825 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:43:16.951018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:43:16.962906 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 24 00:43:16.992339 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 24 00:43:17.003440 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:43:17.006841 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:43:17.010248 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:43:17.025593 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 24 00:43:17.045518 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. 
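The raid6 lines above show the kernel timing each gen() implementation and keeping the fastest (avx512x1 at 18386 MB/s). The same pick, replayed over the throughput figures copied from the log:

# Mirror the raid6 algorithm selection logged above: benchmark results in
# MB/s (copied from this log), pick the fastest gen() implementation.
results = {
    "avx512x4": 18279,
    "avx512x2": 18380,
    "avx512x1": 18386,
    "avx2x4": 18224,
    "avx2x2": 18270,
    "avx2x1": 13992,
}
best = max(results, key=results.get)
print(best, results[best])  # -> avx512x1 18386, matching the log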
Jan 24 00:43:17.059706 kernel: hv_vmbus: Vmbus version:5.2 Jan 24 00:43:17.067838 kernel: cryptd: max_cpu_qlen set to 1000 Jan 24 00:43:17.081716 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 24 00:43:17.089695 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 24 00:43:17.094872 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:43:17.097767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:43:17.101296 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:43:17.107475 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:43:17.107540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:43:17.117138 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:17.141627 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 24 00:43:17.141654 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 24 00:43:17.141687 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 24 00:43:17.141798 kernel: hv_vmbus: registering driver hid_hyperv Jan 24 00:43:17.138952 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:17.150736 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 24 00:43:17.150760 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 24 00:43:17.167704 kernel: hv_vmbus: registering driver hv_netvsc Jan 24 00:43:17.167899 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:43:17.168079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:43:17.180516 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:17.202036 kernel: PTP clock support registered Jan 24 00:43:17.202094 kernel: AVX2 version of gcm_enc/dec engaged. Jan 24 00:43:17.207347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:43:17.218811 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 24 00:43:17.230807 kernel: AES CTR mode by8 optimization enabled Jan 24 00:43:17.230845 kernel: hv_vmbus: registering driver hv_storvsc Jan 24 00:43:17.230857 kernel: hv_utils: Registering HyperV Utility Driver Jan 24 00:43:17.234637 kernel: hv_vmbus: registering driver hv_utils Jan 24 00:43:17.239858 kernel: hv_utils: Heartbeat IC version 3.0 Jan 24 00:43:17.239897 kernel: hv_utils: Shutdown IC version 3.2 Jan 24 00:43:17.241524 kernel: hv_utils: TimeSync IC version 4.0 Jan 24 00:43:17.244485 kernel: scsi host0: storvsc_host_t Jan 24 00:43:17.665857 systemd-resolved[220]: Clock change detected. Flushing caches. Jan 24 00:43:17.671343 kernel: scsi host1: storvsc_host_t Jan 24 00:43:17.676356 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 24 00:43:17.684423 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 24 00:43:17.686576 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 24 00:43:17.712653 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 24 00:43:17.712896 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 24 00:43:17.714393 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 24 00:43:17.725849 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 24 00:43:17.726095 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 24 00:43:17.726254 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 24 00:43:17.728458 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 24 00:43:17.732276 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 24 00:43:17.740390 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:43:17.743349 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 24 00:43:17.783733 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: VF slot 1 added Jan 24 00:43:17.792344 kernel: hv_vmbus: registering driver hv_pci Jan 24 00:43:17.792371 kernel: hv_pci 8790bb89-937b-41cf-a27e-ce0346748335: PCI VMBus probing: Using version 0x10004 Jan 24 00:43:17.802283 kernel: hv_pci 8790bb89-937b-41cf-a27e-ce0346748335: PCI host bridge to bus 937b:00 Jan 24 00:43:17.802541 kernel: pci_bus 937b:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 24 00:43:17.805461 kernel: pci_bus 937b:00: No busn resource found for root bus, will use [bus 00-ff] Jan 24 00:43:17.810452 kernel: pci 937b:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 24 00:43:17.815345 kernel: pci 937b:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:43:17.819425 kernel: pci 937b:00:02.0: enabling Extended Tags Jan 24 00:43:17.829676 kernel: pci 937b:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 937b:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 24 00:43:17.836781 kernel: pci_bus 937b:00: busn_res: [bus 00-ff] end is updated to 00 Jan 24 00:43:17.837014 kernel: pci 937b:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 24 00:43:18.002966 kernel: mlx5_core 937b:00:02.0: enabling device (0000 -> 0002) Jan 24 00:43:18.007351 kernel: mlx5_core 937b:00:02.0: firmware version: 14.30.5026 Jan 24 00:43:18.216369 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: VF registering: eth1 Jan 24 00:43:18.216596 kernel: mlx5_core 937b:00:02.0 eth1: joined to eth0 Jan 24 00:43:18.222826 kernel: mlx5_core 937b:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 24 00:43:18.230355 kernel: mlx5_core 937b:00:02.0 enP37755s1: renamed from eth1 Jan 24 00:43:18.313346 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (439) Jan 24 00:43:18.328442 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:43:18.364934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 24 00:43:18.379185 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 24 00:43:18.394505 kernel: BTRFS: device fsid b9d3569e-180c-420c-96ec-490d7c970b80 devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (452) Jan 24 00:43:18.419081 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 24 00:43:18.422693 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 24 00:43:18.439469 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 24 00:43:18.454343 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:43:18.464345 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:43:18.471346 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:43:19.473347 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 24 00:43:19.473708 disk-uuid[602]: The operation has completed successfully. Jan 24 00:43:19.552021 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 24 00:43:19.552133 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 24 00:43:19.581516 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 24 00:43:19.590713 sh[715]: Success Jan 24 00:43:19.623429 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 24 00:43:19.923206 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 24 00:43:19.940436 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 24 00:43:19.945777 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 24 00:43:19.965690 kernel: BTRFS info (device dm-0): first mount of filesystem b9d3569e-180c-420c-96ec-490d7c970b80 Jan 24 00:43:19.965751 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:43:19.969395 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 24 00:43:19.972206 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 24 00:43:19.974652 kernel: BTRFS info (device dm-0): using free space tree Jan 24 00:43:20.358654 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 24 00:43:20.364387 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 24 00:43:20.374493 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 24 00:43:20.380618 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 24 00:43:20.398768 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:43:20.398823 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:43:20.400587 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:43:20.441359 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:43:20.451962 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 24 00:43:20.458181 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:43:20.469289 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 24 00:43:20.481522 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 24 00:43:20.496438 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:43:20.505520 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:43:20.527226 systemd-networkd[899]: lo: Link UP Jan 24 00:43:20.527236 systemd-networkd[899]: lo: Gained carrier Jan 24 00:43:20.529280 systemd-networkd[899]: Enumeration completed Jan 24 00:43:20.529538 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:43:20.531990 systemd[1]: Reached target network.target - Network. 
Jan 24 00:43:20.533292 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:43:20.533296 systemd-networkd[899]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:43:20.601353 kernel: mlx5_core 937b:00:02.0 enP37755s1: Link up Jan 24 00:43:20.634420 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: Data path switched to VF: enP37755s1 Jan 24 00:43:20.634694 systemd-networkd[899]: enP37755s1: Link UP Jan 24 00:43:20.634829 systemd-networkd[899]: eth0: Link UP Jan 24 00:43:20.634997 systemd-networkd[899]: eth0: Gained carrier Jan 24 00:43:20.635010 systemd-networkd[899]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:43:20.639507 systemd-networkd[899]: enP37755s1: Gained carrier Jan 24 00:43:20.671379 systemd-networkd[899]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16 Jan 24 00:43:21.770920 ignition[876]: Ignition 2.19.0 Jan 24 00:43:21.770932 ignition[876]: Stage: fetch-offline Jan 24 00:43:21.770973 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:21.776492 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:43:21.770985 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:21.771092 ignition[876]: parsed url from cmdline: "" Jan 24 00:43:21.771097 ignition[876]: no config URL provided Jan 24 00:43:21.771104 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:43:21.771113 ignition[876]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:43:21.791537 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 24 00:43:21.771121 ignition[876]: failed to fetch config: resource requires networking Jan 24 00:43:21.774385 ignition[876]: Ignition finished successfully Jan 24 00:43:21.807682 ignition[908]: Ignition 2.19.0 Jan 24 00:43:21.807693 ignition[908]: Stage: fetch Jan 24 00:43:21.807896 ignition[908]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:21.807909 ignition[908]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:21.808007 ignition[908]: parsed url from cmdline: "" Jan 24 00:43:21.808010 ignition[908]: no config URL provided Jan 24 00:43:21.808015 ignition[908]: reading system config file "/usr/lib/ignition/user.ign" Jan 24 00:43:21.809594 ignition[908]: no config at "/usr/lib/ignition/user.ign" Jan 24 00:43:21.809619 ignition[908]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 24 00:43:21.967516 ignition[908]: GET result: OK Jan 24 00:43:21.967637 ignition[908]: config has been read from IMDS userdata Jan 24 00:43:21.967689 ignition[908]: parsing config with SHA512: 9924dfe27864a6247a3267e2436ae6b9337c883dd79003bd88540360078cf1d5e3b35cafccffd402079531c781daf3ec95dccae54cce243467bc4cc6d8de8399 Jan 24 00:43:21.976993 unknown[908]: fetched base config from "system" Jan 24 00:43:21.979230 unknown[908]: fetched base config from "system" Jan 24 00:43:21.979238 unknown[908]: fetched user config from "azure" Jan 24 00:43:21.979670 ignition[908]: fetch: fetch complete Jan 24 00:43:21.979676 ignition[908]: fetch: fetch passed Jan 24 00:43:21.979720 ignition[908]: Ignition finished successfully Jan 24 00:43:21.990742 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). 
Jan 24 00:43:22.000592 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 24 00:43:22.015979 ignition[914]: Ignition 2.19.0 Jan 24 00:43:22.015989 ignition[914]: Stage: kargs Jan 24 00:43:22.018673 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 24 00:43:22.016228 ignition[914]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:22.016239 ignition[914]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:22.017135 ignition[914]: kargs: kargs passed Jan 24 00:43:22.017174 ignition[914]: Ignition finished successfully Jan 24 00:43:22.039456 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 24 00:43:22.055815 ignition[920]: Ignition 2.19.0 Jan 24 00:43:22.055827 ignition[920]: Stage: disks Jan 24 00:43:22.056116 ignition[920]: no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:22.056127 ignition[920]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:22.062256 ignition[920]: disks: disks passed Jan 24 00:43:22.063554 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 24 00:43:22.062302 ignition[920]: Ignition finished successfully Jan 24 00:43:22.068250 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 24 00:43:22.072447 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 24 00:43:22.075868 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:43:22.080918 systemd[1]: Reached target sysinit.target - System Initialization. Jan 24 00:43:22.083611 systemd[1]: Reached target basic.target - Basic System. Jan 24 00:43:22.102524 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 24 00:43:22.177114 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 24 00:43:22.182377 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 24 00:43:22.197783 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 24 00:43:22.246612 systemd-networkd[899]: eth0: Gained IPv6LL Jan 24 00:43:22.290548 kernel: EXT4-fs (sda9): mounted filesystem a752e1f1-ddf3-43b9-88e7-8cc533707c34 r/w with ordered data mode. Quota mode: none. Jan 24 00:43:22.291127 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 24 00:43:22.296311 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 24 00:43:22.347426 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:43:22.363623 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (939) Jan 24 00:43:22.363703 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:43:22.366565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:43:22.369179 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:43:22.379618 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:43:22.377428 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 24 00:43:22.383061 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 24 00:43:22.386136 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 24 00:43:22.386169 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jan 24 00:43:22.402318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 24 00:43:22.407192 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 24 00:43:22.416476 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 24 00:43:23.249179 coreos-metadata[956]: Jan 24 00:43:23.249 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 24 00:43:23.255261 coreos-metadata[956]: Jan 24 00:43:23.255 INFO Fetch successful Jan 24 00:43:23.258212 coreos-metadata[956]: Jan 24 00:43:23.255 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 24 00:43:23.263622 coreos-metadata[956]: Jan 24 00:43:23.263 INFO Fetch successful Jan 24 00:43:23.263622 coreos-metadata[956]: Jan 24 00:43:23.263 INFO wrote hostname ci-4081.3.6-n-e69c55f9b7 to /sysroot/etc/hostname Jan 24 00:43:23.265113 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:43:23.279269 initrd-setup-root[969]: cut: /sysroot/etc/passwd: No such file or directory Jan 24 00:43:23.335291 initrd-setup-root[976]: cut: /sysroot/etc/group: No such file or directory Jan 24 00:43:23.341579 initrd-setup-root[983]: cut: /sysroot/etc/shadow: No such file or directory Jan 24 00:43:23.346774 initrd-setup-root[990]: cut: /sysroot/etc/gshadow: No such file or directory Jan 24 00:43:24.306482 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 24 00:43:24.316470 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 24 00:43:24.320215 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 24 00:43:24.334960 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 24 00:43:24.341406 kernel: BTRFS info (device sda6): last unmount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:43:24.362308 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 24 00:43:24.373224 ignition[1058]: INFO : Ignition 2.19.0 Jan 24 00:43:24.373224 ignition[1058]: INFO : Stage: mount Jan 24 00:43:24.381021 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:24.381021 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:24.381021 ignition[1058]: INFO : mount: mount passed Jan 24 00:43:24.381021 ignition[1058]: INFO : Ignition finished successfully Jan 24 00:43:24.375168 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 24 00:43:24.385424 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 24 00:43:24.399510 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 24 00:43:24.418337 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1070) Jan 24 00:43:24.422346 kernel: BTRFS info (device sda6): first mount of filesystem 56b58288-41fa-4f43-bfdd-27464065c8e8 Jan 24 00:43:24.422379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 24 00:43:24.427352 kernel: BTRFS info (device sda6): using free space tree Jan 24 00:43:24.434340 kernel: BTRFS info (device sda6): auto enabling async discard Jan 24 00:43:24.436140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 24 00:43:24.464786 ignition[1087]: INFO : Ignition 2.19.0 Jan 24 00:43:24.464786 ignition[1087]: INFO : Stage: files Jan 24 00:43:24.469277 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:24.469277 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:24.469277 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 24 00:43:24.469277 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 24 00:43:24.469277 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 24 00:43:24.574220 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 24 00:43:24.578217 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 24 00:43:24.582390 unknown[1087]: wrote ssh authorized keys file for user: core Jan 24 00:43:24.585206 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 24 00:43:24.599789 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:43:24.604670 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Jan 24 00:43:24.648225 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 24 00:43:24.713028 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Jan 24 00:43:24.718224 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 24 00:43:24.722837 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 24 00:43:24.727778 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:43:24.732478 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 24 00:43:24.737118 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:43:24.741785 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 24 00:43:24.746481 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:24.751466 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1 Jan 24 00:43:25.157719 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 24 00:43:25.324864 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw" Jan 24 00:43:25.324864 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 24 00:43:25.360273 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:43:25.366791 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 24 00:43:25.366791 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 24 00:43:25.375701 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 24 00:43:25.375701 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 24 00:43:25.383305 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:43:25.387813 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 24 00:43:25.387813 ignition[1087]: INFO : files: files passed Jan 24 00:43:25.387813 ignition[1087]: INFO : Ignition finished successfully Jan 24 00:43:25.386019 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 24 00:43:25.406511 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 24 00:43:25.413051 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 24 00:43:25.416251 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 24 00:43:25.417402 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 24 00:43:25.433172 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.433172 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.438113 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 24 00:43:25.436612 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:43:25.439894 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 24 00:43:25.462555 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 24 00:43:25.490456 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 24 00:43:25.490572 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Jan 24 00:43:25.500055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 24 00:43:25.502793 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 24 00:43:25.505756 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 24 00:43:25.520517 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 24 00:43:25.535625 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:43:25.547470 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 24 00:43:25.563070 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 24 00:43:25.563263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:43:25.564473 systemd[1]: Stopped target timers.target - Timer Units. Jan 24 00:43:25.564869 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 24 00:43:25.565003 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 24 00:43:25.565845 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 24 00:43:25.566412 systemd[1]: Stopped target basic.target - Basic System. Jan 24 00:43:25.566836 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 24 00:43:25.567276 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 24 00:43:25.567718 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 24 00:43:25.568203 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 24 00:43:25.568665 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 24 00:43:25.569120 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 24 00:43:25.569537 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 24 00:43:25.570033 systemd[1]: Stopped target swap.target - Swaps. Jan 24 00:43:25.570899 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 24 00:43:25.571027 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 24 00:43:25.571816 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 24 00:43:25.572300 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:43:25.572708 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 24 00:43:25.611108 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:43:25.667645 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 24 00:43:25.667868 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 24 00:43:25.673912 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 24 00:43:25.674058 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 24 00:43:25.685585 systemd[1]: ignition-files.service: Deactivated successfully. Jan 24 00:43:25.685722 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 24 00:43:25.693528 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 24 00:43:25.693695 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 24 00:43:25.705538 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Jan 24 00:43:25.711015 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 24 00:43:25.711155 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:43:25.721841 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 24 00:43:25.727128 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 24 00:43:25.727313 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:43:25.731078 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 24 00:43:25.740549 ignition[1139]: INFO : Ignition 2.19.0 Jan 24 00:43:25.740549 ignition[1139]: INFO : Stage: umount Jan 24 00:43:25.740549 ignition[1139]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 24 00:43:25.740549 ignition[1139]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 24 00:43:25.740549 ignition[1139]: INFO : umount: umount passed Jan 24 00:43:25.740549 ignition[1139]: INFO : Ignition finished successfully Jan 24 00:43:25.731697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 24 00:43:25.756721 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 24 00:43:25.757681 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 24 00:43:25.766585 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 24 00:43:25.766678 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 24 00:43:25.781064 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 24 00:43:25.781121 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 24 00:43:25.783769 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 24 00:43:25.783840 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 24 00:43:25.788942 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 24 00:43:25.788990 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 24 00:43:25.791601 systemd[1]: Stopped target network.target - Network. Jan 24 00:43:25.796386 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 24 00:43:25.796443 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 24 00:43:25.816511 systemd[1]: Stopped target paths.target - Path Units. Jan 24 00:43:25.821382 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 24 00:43:25.826456 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:43:25.830114 systemd[1]: Stopped target slices.target - Slice Units. Jan 24 00:43:25.838848 systemd[1]: Stopped target sockets.target - Socket Units. Jan 24 00:43:25.841488 systemd[1]: iscsid.socket: Deactivated successfully. Jan 24 00:43:25.841523 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 24 00:43:25.844126 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 24 00:43:25.844169 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 24 00:43:25.844273 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 24 00:43:25.844316 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 24 00:43:25.844744 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 24 00:43:25.844778 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 24 00:43:25.845298 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jan 24 00:43:25.846116 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 24 00:43:25.847556 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 24 00:43:25.870392 systemd-networkd[899]: eth0: DHCPv6 lease lost Jan 24 00:43:25.872737 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 24 00:43:25.872843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 24 00:43:25.879553 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 24 00:43:25.879651 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 24 00:43:25.886554 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 24 00:43:25.886626 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:43:25.909466 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 24 00:43:25.912793 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 24 00:43:25.912853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 24 00:43:25.921679 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 24 00:43:25.921730 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:43:25.924572 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 24 00:43:25.924622 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 24 00:43:25.946444 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 24 00:43:25.946504 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 24 00:43:25.955673 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:43:25.980740 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 24 00:43:25.980899 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:43:25.988048 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 24 00:43:25.988113 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 24 00:43:25.994079 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 24 00:43:25.994119 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:43:26.010857 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: Data path switched from VF: enP37755s1 Jan 24 00:43:25.997017 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 24 00:43:26.010849 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 24 00:43:26.013659 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 24 00:43:26.013705 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 24 00:43:26.014259 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 24 00:43:26.014299 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 24 00:43:26.025537 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 24 00:43:26.033321 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 24 00:43:26.033430 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:43:26.036790 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:43:26.036840 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 24 00:43:26.045727 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 24 00:43:26.045822 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 24 00:43:26.052131 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 24 00:43:26.052209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 24 00:43:26.705455 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 24 00:43:26.705603 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 24 00:43:26.708685 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 24 00:43:26.718149 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 24 00:43:26.718218 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 24 00:43:26.728567 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 24 00:43:26.766493 systemd[1]: Switching root. Jan 24 00:43:26.856929 systemd-journald[177]: Journal stopped Jan 24 00:43:32.107793 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). Jan 24 00:43:32.107824 kernel: SELinux: policy capability network_peer_controls=1 Jan 24 00:43:32.107837 kernel: SELinux: policy capability open_perms=1 Jan 24 00:43:32.107847 kernel: SELinux: policy capability extended_socket_class=1 Jan 24 00:43:32.107856 kernel: SELinux: policy capability always_check_network=0 Jan 24 00:43:32.107866 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 24 00:43:32.107876 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 24 00:43:32.107889 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 24 00:43:32.107899 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 24 00:43:32.107909 kernel: audit: type=1403 audit(1769215408.185:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 24 00:43:32.107919 systemd[1]: Successfully loaded SELinux policy in 137.596ms. Jan 24 00:43:32.107932 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.633ms. Jan 24 00:43:32.107942 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 24 00:43:32.107954 systemd[1]: Detected virtualization microsoft. Jan 24 00:43:32.107969 systemd[1]: Detected architecture x86-64. Jan 24 00:43:32.107981 systemd[1]: Detected first boot. Jan 24 00:43:32.107992 systemd[1]: Hostname set to <ci-4081.3.6-n-e69c55f9b7>. Jan 24 00:43:32.108004 systemd[1]: Initializing machine ID from random generator. Jan 24 00:43:32.108014 zram_generator::config[1182]: No configuration found. Jan 24 00:43:32.108029 systemd[1]: Populated /etc with preset unit settings. Jan 24 00:43:32.108039 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 24 00:43:32.108051 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 24 00:43:32.108061 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 24 00:43:32.108074 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 24 00:43:32.108084 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 24 00:43:32.108096 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 24 00:43:32.108109 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 24 00:43:32.108121 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 24 00:43:32.108132 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 24 00:43:32.108143 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 24 00:43:32.108154 systemd[1]: Created slice user.slice - User and Session Slice. Jan 24 00:43:32.108164 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 24 00:43:32.108177 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 24 00:43:32.108189 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 24 00:43:32.108201 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 24 00:43:32.108215 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 24 00:43:32.108226 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 24 00:43:32.108238 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 24 00:43:32.108248 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 24 00:43:32.108261 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 24 00:43:32.108274 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 24 00:43:32.108287 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 24 00:43:32.108300 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 24 00:43:32.108313 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 24 00:43:32.113466 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 24 00:43:32.113503 systemd[1]: Reached target slices.target - Slice Units. Jan 24 00:43:32.113522 systemd[1]: Reached target swap.target - Swaps. Jan 24 00:43:32.113541 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 24 00:43:32.113559 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 24 00:43:32.113581 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 24 00:43:32.113599 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 24 00:43:32.113618 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 24 00:43:32.113636 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 24 00:43:32.113653 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 24 00:43:32.113674 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 24 00:43:32.113693 systemd[1]: Mounting media.mount - External Media Directory... Jan 24 00:43:32.113711 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:43:32.113729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 24 00:43:32.113746 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 24 00:43:32.113764 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Jan 24 00:43:32.113782 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 24 00:43:32.113800 systemd[1]: Reached target machines.target - Containers. Jan 24 00:43:32.113821 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 24 00:43:32.113839 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 24 00:43:32.113857 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 24 00:43:32.113875 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 24 00:43:32.113893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 24 00:43:32.113910 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 24 00:43:32.113928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 24 00:43:32.113946 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 24 00:43:32.113963 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 24 00:43:32.113984 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 24 00:43:32.114002 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 24 00:43:32.114020 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 24 00:43:32.114037 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 24 00:43:32.114055 systemd[1]: Stopped systemd-fsck-usr.service. Jan 24 00:43:32.114073 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 24 00:43:32.114091 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 24 00:43:32.114109 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 24 00:43:32.114131 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 24 00:43:32.114148 kernel: loop: module loaded Jan 24 00:43:32.114165 kernel: fuse: init (API version 7.39) Jan 24 00:43:32.114182 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 24 00:43:32.114200 systemd[1]: verity-setup.service: Deactivated successfully. Jan 24 00:43:32.114240 systemd-journald[1264]: Collecting audit messages is disabled. Jan 24 00:43:32.114283 systemd[1]: Stopped verity-setup.service. Jan 24 00:43:32.114302 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 24 00:43:32.114321 systemd-journald[1264]: Journal started Jan 24 00:43:32.114383 systemd-journald[1264]: Runtime Journal (/run/log/journal/b50ab60fdda24a038487ea69727aa1b5) is 8.0M, max 158.8M, 150.8M free. Jan 24 00:43:31.382083 systemd[1]: Queued start job for default target multi-user.target. Jan 24 00:43:31.541793 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 24 00:43:31.542149 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 24 00:43:32.127035 systemd[1]: Started systemd-journald.service - Journal Service. Jan 24 00:43:32.124200 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Jan 24 00:43:32.131857 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 24 00:43:32.134689 systemd[1]: Mounted media.mount - External Media Directory. Jan 24 00:43:32.162320 kernel: ACPI: bus type drm_connector registered Jan 24 00:43:32.139795 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 24 00:43:32.142885 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 24 00:43:32.146056 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 24 00:43:32.149072 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 24 00:43:32.152643 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 24 00:43:32.156352 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 24 00:43:32.156522 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 24 00:43:32.162171 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 24 00:43:32.162806 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 24 00:43:32.166572 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 24 00:43:32.166871 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 24 00:43:32.170524 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 24 00:43:32.170712 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 24 00:43:32.174689 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 24 00:43:32.174970 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 24 00:43:32.178689 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 24 00:43:32.178926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 24 00:43:32.182441 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 24 00:43:32.186194 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 24 00:43:32.190345 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 24 00:43:32.208120 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 24 00:43:32.219420 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 24 00:43:32.230076 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 24 00:43:32.233375 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 24 00:43:32.233488 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 24 00:43:32.237662 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 24 00:43:32.249486 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 24 00:43:32.253677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 24 00:43:32.256593 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 24 00:43:32.263136 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 24 00:43:32.271314 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
Jan 24 00:43:32.274875 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 24 00:43:32.275990 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 24 00:43:32.279148 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 24 00:43:32.283449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 24 00:43:32.288498 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 24 00:43:32.292795 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 24 00:43:32.298028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 24 00:43:32.303952 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 24 00:43:32.307550 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 24 00:43:32.311601 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 24 00:43:32.315447 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 24 00:43:32.317738 systemd-journald[1264]: Time spent on flushing to /var/log/journal/b50ab60fdda24a038487ea69727aa1b5 is 31.901ms for 958 entries. Jan 24 00:43:32.317738 systemd-journald[1264]: System Journal (/var/log/journal/b50ab60fdda24a038487ea69727aa1b5) is 8.0M, max 2.6G, 2.6G free. Jan 24 00:43:32.369746 systemd-journald[1264]: Received client request to flush runtime journal. Jan 24 00:43:32.326228 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 24 00:43:32.341558 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 24 00:43:32.351508 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 24 00:43:32.376065 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 24 00:43:32.380675 kernel: loop0: detected capacity change from 0 to 31056 Jan 24 00:43:32.386386 udevadm[1328]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 24 00:43:32.404297 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 24 00:43:32.405838 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 24 00:43:32.428618 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 24 00:43:32.441488 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 24 00:43:32.482205 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 24 00:43:32.551159 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 24 00:43:32.551188 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 24 00:43:32.557242 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 24 00:43:32.872784 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 24 00:43:32.913355 kernel: loop1: detected capacity change from 0 to 219144 Jan 24 00:43:33.016353 kernel: loop2: detected capacity change from 0 to 142488 Jan 24 00:43:33.306042 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 24 00:43:33.314638 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 24 00:43:33.338452 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Jan 24 00:43:33.637457 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 24 00:43:33.670874 kernel: loop3: detected capacity change from 0 to 140768 Jan 24 00:43:33.661479 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 24 00:43:33.705490 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 24 00:43:33.796410 kernel: mousedev: PS/2 mouse device common for all mice Jan 24 00:43:33.800364 kernel: hv_vmbus: registering driver hv_balloon Jan 24 00:43:33.818593 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 24 00:43:33.835360 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 24 00:43:33.845388 kernel: hv_vmbus: registering driver hyperv_fb Jan 24 00:43:33.849360 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 24 00:43:33.855434 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 24 00:43:33.861683 kernel: Console: switching to colour dummy device 80x25 Jan 24 00:43:33.867228 kernel: Console: switching to colour frame buffer device 128x48 Jan 24 00:43:33.903694 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:33.916470 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 24 00:43:34.052651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 24 00:43:34.052872 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 24 00:43:34.071543 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 24 00:43:34.111346 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1354) Jan 24 00:43:34.173975 systemd-networkd[1356]: lo: Link UP Jan 24 00:43:34.173990 systemd-networkd[1356]: lo: Gained carrier Jan 24 00:43:34.179237 systemd-networkd[1356]: Enumeration completed Jan 24 00:43:34.180508 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 24 00:43:34.184208 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 24 00:43:34.184221 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 24 00:43:34.188583 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 24 00:43:34.214062 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 24 00:43:34.225380 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 24 00:43:34.243358 kernel: mlx5_core 937b:00:02.0 enP37755s1: Link up
Jan 24 00:43:34.255419 kernel: loop4: detected capacity change from 0 to 31056
Jan 24 00:43:34.262769 kernel: hv_netvsc 7c1e522d-7db2-7c1e-522d-7db27c1e522d eth0: Data path switched to VF: enP37755s1
Jan 24 00:43:34.263514 systemd-networkd[1356]: enP37755s1: Link UP
Jan 24 00:43:34.263673 systemd-networkd[1356]: eth0: Link UP
Jan 24 00:43:34.263678 systemd-networkd[1356]: eth0: Gained carrier
Jan 24 00:43:34.263708 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:43:34.268650 systemd-networkd[1356]: enP37755s1: Gained carrier
Jan 24 00:43:34.274357 kernel: loop5: detected capacity change from 0 to 219144
Jan 24 00:43:34.300402 kernel: loop6: detected capacity change from 0 to 142488
Jan 24 00:43:34.309413 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 24 00:43:34.314577 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:43:34.325340 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 24 00:43:34.345352 kernel: loop7: detected capacity change from 0 to 140768
Jan 24 00:43:34.363882 (sd-merge)[1434]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 24 00:43:34.364481 (sd-merge)[1434]: Merged extensions into '/usr'.
Jan 24 00:43:34.380088 systemd[1]: Reloading requested from client PID 1318 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 24 00:43:34.380104 systemd[1]: Reloading...
Jan 24 00:43:34.469354 zram_generator::config[1470]: No configuration found.
Jan 24 00:43:34.610993 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:43:34.693839 systemd[1]: Reloading finished in 313 ms.
Jan 24 00:43:34.727429 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 24 00:43:34.731775 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 24 00:43:34.735613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 24 00:43:34.749477 systemd[1]: Starting ensure-sysext.service...
Jan 24 00:43:34.753496 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 24 00:43:34.758496 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 24 00:43:34.774387 systemd[1]: Reloading requested from client PID 1529 ('systemctl') (unit ensure-sysext.service)...
Jan 24 00:43:34.774403 systemd[1]: Reloading...
Jan 24 00:43:34.803009 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 24 00:43:34.804081 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 24 00:43:34.807574 systemd-tmpfiles[1531]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 24 00:43:34.808120 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
Jan 24 00:43:34.808302 systemd-tmpfiles[1531]: ACLs are not supported, ignoring.
Jan 24 00:43:34.832359 zram_generator::config[1557]: No configuration found.
Jan 24 00:43:34.844498 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:43:34.844514 systemd-tmpfiles[1531]: Skipping /boot
Jan 24 00:43:34.861353 lvm[1530]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:43:34.870481 systemd-tmpfiles[1531]: Detected autofs mount point /boot during canonicalization of boot.
Jan 24 00:43:34.870618 systemd-tmpfiles[1531]: Skipping /boot
Jan 24 00:43:35.018716 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 24 00:43:35.096644 systemd[1]: Reloading finished in 321 ms.
Jan 24 00:43:35.117792 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 24 00:43:35.122709 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 24 00:43:35.132479 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 24 00:43:35.140585 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 24 00:43:35.145720 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 24 00:43:35.151596 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 24 00:43:35.163598 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 24 00:43:35.172537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 24 00:43:35.175214 lvm[1626]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 24 00:43:35.182590 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 24 00:43:35.190676 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.190940 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:43:35.199367 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:43:35.213587 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:43:35.220015 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:43:35.223117 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:43:35.223396 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.225060 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:43:35.225254 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:43:35.233579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:43:35.233736 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:43:35.241819 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 24 00:43:35.246232 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:43:35.246567 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:43:35.258228 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.259055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:43:35.260607 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:43:35.272627 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:43:35.285915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 24 00:43:35.290899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:43:35.291058 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.293818 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 24 00:43:35.298355 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:43:35.298679 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:43:35.306788 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:43:35.306954 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:43:35.322746 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 24 00:43:35.322953 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 24 00:43:35.327530 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 24 00:43:35.332992 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.333487 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 24 00:43:35.337487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 24 00:43:35.343175 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 24 00:43:35.350469 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 24 00:43:35.353877 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 24 00:43:35.353970 systemd[1]: Reached target time-set.target - System Time Set.
Jan 24 00:43:35.357056 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 24 00:43:35.357686 systemd[1]: Finished ensure-sysext.service.
Jan 24 00:43:35.360685 augenrules[1657]: No rules
Jan 24 00:43:35.360951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 24 00:43:35.361120 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 24 00:43:35.366189 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 24 00:43:35.366640 systemd-networkd[1356]: eth0: Gained IPv6LL
Jan 24 00:43:35.370543 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 24 00:43:35.370742 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 24 00:43:35.374088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 24 00:43:35.374507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 24 00:43:35.378248 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 24 00:43:35.386788 systemd-resolved[1628]: Positive Trust Anchors:
Jan 24 00:43:35.386815 systemd-resolved[1628]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 24 00:43:35.386871 systemd-resolved[1628]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 24 00:43:35.387767 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 24 00:43:35.387879 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 24 00:43:35.427347 systemd-resolved[1628]: Using system hostname 'ci-4081.3.6-n-e69c55f9b7'.
Jan 24 00:43:35.429645 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 24 00:43:35.433359 systemd[1]: Reached target network.target - Network.
Jan 24 00:43:35.435731 systemd[1]: Reached target network-online.target - Network is Online.
Jan 24 00:43:35.438718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 24 00:43:35.892296 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 24 00:43:35.896983 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 24 00:43:38.963759 ldconfig[1313]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 24 00:43:38.977785 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 24 00:43:38.985539 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 24 00:43:39.013291 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 24 00:43:39.016644 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 24 00:43:39.019453 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 24 00:43:39.022687 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 24 00:43:39.025958 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 24 00:43:39.028916 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 24 00:43:39.032354 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 24 00:43:39.035669 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 24 00:43:39.035708 systemd[1]: Reached target paths.target - Path Units.
Jan 24 00:43:39.038121 systemd[1]: Reached target timers.target - Timer Units.
Jan 24 00:43:39.041714 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 24 00:43:39.046093 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 24 00:43:39.058265 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 24 00:43:39.062552 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 24 00:43:39.065669 systemd[1]: Reached target sockets.target - Socket Units.
Jan 24 00:43:39.068274 systemd[1]: Reached target basic.target - Basic System.
Jan 24 00:43:39.070906 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:43:39.070966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 24 00:43:39.076421 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 24 00:43:39.082455 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 24 00:43:39.094844 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 24 00:43:39.100564 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 24 00:43:39.107446 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 24 00:43:39.117506 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 24 00:43:39.120282 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 24 00:43:39.120367 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 24 00:43:39.122484 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 24 00:43:39.127518 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 24 00:43:39.128473 jq[1683]: false
Jan 24 00:43:39.129869 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 24 00:43:39.139529 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 24 00:43:39.146578 (chronyd)[1679]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 24 00:43:39.149490 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 24 00:43:39.158231 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 24 00:43:39.162486 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 24 00:43:39.170486 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 24 00:43:39.180777 KVP[1687]: KVP starting; pid is:1687
Jan 24 00:43:39.184553 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 24 00:43:39.188955 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 24 00:43:39.189552 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 24 00:43:39.191134 systemd[1]: Starting update-engine.service - Update Engine...
Jan 24 00:43:39.201135 extend-filesystems[1686]: Found loop4
Jan 24 00:43:39.212523 kernel: hv_utils: KVP IC version 4.0
Jan 24 00:43:39.209308 KVP[1687]: KVP LIC Version: 3.1
Jan 24 00:43:39.212643 extend-filesystems[1686]: Found loop5
Jan 24 00:43:39.212643 extend-filesystems[1686]: Found loop6
Jan 24 00:43:39.212643 extend-filesystems[1686]: Found loop7
Jan 24 00:43:39.212643 extend-filesystems[1686]: Found sda
Jan 24 00:43:39.203269 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 24 00:43:39.209860 chronyd[1705]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda1
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda2
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda3
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found usr
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda4
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda6
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda7
Jan 24 00:43:39.223040 extend-filesystems[1686]: Found sda9
Jan 24 00:43:39.223040 extend-filesystems[1686]: Checking size of /dev/sda9
Jan 24 00:43:39.220863 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 24 00:43:39.221063 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 24 00:43:39.225548 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 24 00:43:39.225744 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 24 00:43:39.271694 jq[1701]: true
Jan 24 00:43:39.246564 systemd[1]: motdgen.service: Deactivated successfully.
Jan 24 00:43:39.246715 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 24 00:43:39.279388 chronyd[1705]: Timezone right/UTC failed leap second check, ignoring
Jan 24 00:43:39.284687 systemd[1]: Started chronyd.service - NTP client/server.
Jan 24 00:43:39.279612 chronyd[1705]: Loaded seccomp filter (level 2)
Jan 24 00:43:39.298698 (ntainerd)[1714]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 24 00:43:39.309353 jq[1712]: true
Jan 24 00:43:39.320616 update_engine[1699]: I20260124 00:43:39.318675 1699 main.cc:92] Flatcar Update Engine starting
Jan 24 00:43:39.331384 extend-filesystems[1686]: Old size kept for /dev/sda9
Jan 24 00:43:39.331384 extend-filesystems[1686]: Found sr0
Jan 24 00:43:39.342676 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 24 00:43:39.343252 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 24 00:43:39.348404 tar[1711]: linux-amd64/LICENSE
Jan 24 00:43:39.348656 tar[1711]: linux-amd64/helm
Jan 24 00:43:39.391945 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 24 00:43:39.402930 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 24 00:43:39.405292 systemd-logind[1697]: New seat seat0.
Jan 24 00:43:39.408095 dbus-daemon[1682]: [system] SELinux support is enabled
Jan 24 00:43:39.408420 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 24 00:43:39.422444 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 24 00:43:39.427965 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 24 00:43:39.428005 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 24 00:43:39.431684 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 24 00:43:39.431711 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 24 00:43:39.440785 update_engine[1699]: I20260124 00:43:39.439924 1699 update_check_scheduler.cc:74] Next update check in 5m33s
Jan 24 00:43:39.440876 systemd[1]: Started update-engine.service - Update Engine.
Jan 24 00:43:39.446391 dbus-daemon[1682]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 24 00:43:39.458652 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 24 00:43:39.559848 bash[1756]: Updated "/home/core/.ssh/authorized_keys"
Jan 24 00:43:39.561762 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 24 00:43:39.570411 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 24 00:43:39.630416 coreos-metadata[1681]: Jan 24 00:43:39.627 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 24 00:43:39.633859 coreos-metadata[1681]: Jan 24 00:43:39.631 INFO Fetch successful
Jan 24 00:43:39.633859 coreos-metadata[1681]: Jan 24 00:43:39.631 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 24 00:43:39.643511 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1763)
Jan 24 00:43:39.643583 coreos-metadata[1681]: Jan 24 00:43:39.639 INFO Fetch successful
Jan 24 00:43:39.643583 coreos-metadata[1681]: Jan 24 00:43:39.640 INFO Fetching http://168.63.129.16/machine/a28a773e-444c-46e2-9592-c7ae4118a5c6/986393bb%2D55d2%2D45ca%2D9b0f%2D683fb92545f2.%5Fci%2D4081.3.6%2Dn%2De69c55f9b7?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 24 00:43:39.644402 coreos-metadata[1681]: Jan 24 00:43:39.644 INFO Fetch successful
Jan 24 00:43:39.644402 coreos-metadata[1681]: Jan 24 00:43:39.644 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 24 00:43:39.663219 coreos-metadata[1681]: Jan 24 00:43:39.663 INFO Fetch successful
Jan 24 00:43:39.755251 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 24 00:43:39.761059 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 24 00:43:39.892915 locksmithd[1757]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 24 00:43:39.974360 sshd_keygen[1729]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 24 00:43:40.030296 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 24 00:43:40.047623 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 24 00:43:40.061135 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 24 00:43:40.086195 systemd[1]: issuegen.service: Deactivated successfully.
Jan 24 00:43:40.086681 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 24 00:43:40.091863 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 24 00:43:40.105808 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 24 00:43:40.142526 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 24 00:43:40.154542 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 24 00:43:40.167756 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 24 00:43:40.171216 systemd[1]: Reached target getty.target - Login Prompts.
Jan 24 00:43:40.438973 tar[1711]: linux-amd64/README.md
Jan 24 00:43:40.449366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 24 00:43:40.516464 containerd[1714]: time="2026-01-24T00:43:40.514941800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 24 00:43:40.548934 containerd[1714]: time="2026-01-24T00:43:40.548884800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.550677 containerd[1714]: time="2026-01-24T00:43:40.550637700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:43:40.550677 containerd[1714]: time="2026-01-24T00:43:40.550670000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 24 00:43:40.550839 containerd[1714]: time="2026-01-24T00:43:40.550690400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 24 00:43:40.550879 containerd[1714]: time="2026-01-24T00:43:40.550860900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 24 00:43:40.550922 containerd[1714]: time="2026-01-24T00:43:40.550888700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.550967700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.550989100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551202000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551222800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551243000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551258500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551378800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551625200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551793200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551812900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 24 00:43:40.552368 containerd[1714]: time="2026-01-24T00:43:40.551933600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 24 00:43:40.552693 containerd[1714]: time="2026-01-24T00:43:40.551992500Z" level=info msg="metadata content store policy set" policy=shared
Jan 24 00:43:40.568765 containerd[1714]: time="2026-01-24T00:43:40.568728200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 24 00:43:40.568872 containerd[1714]: time="2026-01-24T00:43:40.568787600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 24 00:43:40.568872 containerd[1714]: time="2026-01-24T00:43:40.568812100Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 24 00:43:40.568872 containerd[1714]: time="2026-01-24T00:43:40.568831900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 24 00:43:40.568872 containerd[1714]: time="2026-01-24T00:43:40.568850100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 24 00:43:40.569027 containerd[1714]: time="2026-01-24T00:43:40.569008100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.569861700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570020700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570044800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570068900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570093400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570117500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570139000Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570165300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570187400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570228100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570253600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570274700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570306400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571006 containerd[1714]: time="2026-01-24T00:43:40.570344000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570367100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570406900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570426000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570448600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570471500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570495700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570519000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570546200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570567500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570585500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570607500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570634100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570666100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.570688400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.571572 containerd[1714]: time="2026-01-24T00:43:40.571098400Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571205200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571236300Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571253800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571277800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571297900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571320900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571358200Z" level=info msg="NRI interface is disabled by configuration."
Jan 24 00:43:40.572132 containerd[1714]: time="2026-01-24T00:43:40.571379000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 00:43:40.572418 containerd[1714]: time="2026-01-24T00:43:40.571798900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 24 00:43:40.572418 containerd[1714]: time="2026-01-24T00:43:40.571886400Z" level=info msg="Connect containerd service"
Jan 24 00:43:40.572418 containerd[1714]: time="2026-01-24T00:43:40.571951200Z" level=info msg="using legacy CRI server"
Jan 24 00:43:40.572418 containerd[1714]: time="2026-01-24T00:43:40.571963000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 24 00:43:40.572418 containerd[1714]: time="2026-01-24T00:43:40.572096300Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 24 00:43:40.572993 containerd[1714]: time="2026-01-24T00:43:40.572952600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 24 00:43:40.573565 containerd[1714]: time="2026-01-24T00:43:40.573537200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 24 00:43:40.573632 containerd[1714]: time="2026-01-24T00:43:40.573605100Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 24 00:43:40.573669 containerd[1714]: time="2026-01-24T00:43:40.573622400Z" level=info msg="Start subscribing containerd event"
Jan 24 00:43:40.573716 containerd[1714]: time="2026-01-24T00:43:40.573672200Z" level=info msg="Start recovering state"
Jan 24 00:43:40.573772 containerd[1714]: time="2026-01-24T00:43:40.573740500Z" level=info msg="Start event monitor"
Jan 24 00:43:40.573772 containerd[1714]: time="2026-01-24T00:43:40.573763300Z" level=info msg="Start snapshots syncer"
Jan 24 00:43:40.573845 containerd[1714]: time="2026-01-24T00:43:40.573775700Z" level=info msg="Start cni network conf syncer for default"
Jan 24 00:43:40.573845 containerd[1714]: time="2026-01-24T00:43:40.573786900Z" level=info msg="Start streaming server"
Jan 24 00:43:40.573960 systemd[1]: Started containerd.service - containerd container runtime.
Jan 24 00:43:40.578320 containerd[1714]: time="2026-01-24T00:43:40.578237200Z" level=info msg="containerd successfully booted in 0.064840s"
Jan 24 00:43:41.007967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 24 00:43:41.012538 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 24 00:43:41.013198 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 24 00:43:41.015748 systemd[1]: Startup finished in 892ms (firmware) + 18.738s (loader) + 978ms (kernel) + 11.928s (initrd) + 12.966s (userspace) = 45.504s.
Jan 24 00:43:41.478137 login[1824]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 00:43:41.478573 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 24 00:43:41.491797 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 24 00:43:41.499759 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 24 00:43:41.502112 systemd-logind[1697]: New session 1 of user core.
Jan 24 00:43:41.512888 systemd-logind[1697]: New session 2 of user core.
Jan 24 00:43:41.533368 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 24 00:43:41.539692 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 24 00:43:41.575258 (systemd)[1852]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 24 00:43:41.652366 kubelet[1840]: E0124 00:43:41.652146 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 24 00:43:41.656558 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 24 00:43:41.656750 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 24 00:43:41.735956 systemd[1852]: Queued start job for default target default.target.
Jan 24 00:43:41.742418 systemd[1852]: Created slice app.slice - User Application Slice.
Jan 24 00:43:41.742453 systemd[1852]: Reached target paths.target - Paths.
Jan 24 00:43:41.742471 systemd[1852]: Reached target timers.target - Timers.
Jan 24 00:43:41.745483 systemd[1852]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 24 00:43:41.755861 systemd[1852]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 24 00:43:41.755930 systemd[1852]: Reached target sockets.target - Sockets.
Jan 24 00:43:41.755947 systemd[1852]: Reached target basic.target - Basic System.
Jan 24 00:43:41.755989 systemd[1852]: Reached target default.target - Main User Target.
Jan 24 00:43:41.756031 systemd[1852]: Startup finished in 172ms.
Jan 24 00:43:41.756434 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 24 00:43:41.765709 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 24 00:43:41.768093 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 24 00:43:42.181268 waagent[1818]: 2026-01-24T00:43:42.181171Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 24 00:43:42.184545 waagent[1818]: 2026-01-24T00:43:42.184482Z INFO Daemon Daemon OS: flatcar 4081.3.6
Jan 24 00:43:42.187258 waagent[1818]: 2026-01-24T00:43:42.187199Z INFO Daemon Daemon Python: 3.11.9
Jan 24 00:43:42.189595 waagent[1818]: 2026-01-24T00:43:42.189535Z INFO Daemon Daemon Run daemon
Jan 24 00:43:42.191887 waagent[1818]: 2026-01-24T00:43:42.191835Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6'
Jan 24 00:43:42.196538 waagent[1818]: 2026-01-24T00:43:42.196309Z INFO Daemon Daemon Using waagent for provisioning
Jan 24 00:43:42.199163 waagent[1818]: 2026-01-24T00:43:42.199111Z INFO Daemon Daemon Activate resource disk
Jan 24 00:43:42.201674 waagent[1818]: 2026-01-24T00:43:42.201622Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 24 00:43:42.208893 waagent[1818]: 2026-01-24T00:43:42.208836Z INFO Daemon Daemon Found device: None
Jan 24 00:43:42.211238 waagent[1818]: 2026-01-24T00:43:42.211187Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 24 00:43:42.215461 waagent[1818]: 2026-01-24T00:43:42.215410Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 24 00:43:42.222444 waagent[1818]: 2026-01-24T00:43:42.222383Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 24 00:43:42.227689 waagent[1818]: 2026-01-24T00:43:42.222675Z INFO Daemon Daemon Running default provisioning handler
Jan 24 00:43:42.230980 waagent[1818]: 2026-01-24T00:43:42.230929Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 24 00:43:42.237750 waagent[1818]: 2026-01-24T00:43:42.237701Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 24 00:43:42.246779 waagent[1818]: 2026-01-24T00:43:42.242315Z INFO Daemon Daemon cloud-init is enabled: False
Jan 24 00:43:42.246779 waagent[1818]: 2026-01-24T00:43:42.242536Z INFO Daemon Daemon Copying ovf-env.xml
Jan 24 00:43:42.327926 waagent[1818]: 2026-01-24T00:43:42.324459Z INFO Daemon Daemon Successfully mounted dvd
Jan 24 00:43:42.355432 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 24 00:43:42.357793 waagent[1818]: 2026-01-24T00:43:42.357729Z INFO Daemon Daemon Detect protocol endpoint
Jan 24 00:43:42.374504 waagent[1818]: 2026-01-24T00:43:42.358077Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 24 00:43:42.374504 waagent[1818]: 2026-01-24T00:43:42.359374Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 24 00:43:42.374504 waagent[1818]: 2026-01-24T00:43:42.359883Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 24 00:43:42.374504 waagent[1818]: 2026-01-24T00:43:42.361039Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 24 00:43:42.374504 waagent[1818]: 2026-01-24T00:43:42.361995Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 24 00:43:42.428392 waagent[1818]: 2026-01-24T00:43:42.428304Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 24 00:43:42.437098 waagent[1818]: 2026-01-24T00:43:42.428943Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 24 00:43:42.437098 waagent[1818]: 2026-01-24T00:43:42.430386Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 24 00:43:42.575689 waagent[1818]: 2026-01-24T00:43:42.575590Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 24 00:43:42.579718 waagent[1818]: 2026-01-24T00:43:42.579654Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 24 00:43:42.586184 waagent[1818]: 2026-01-24T00:43:42.586133Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 24 00:43:42.602223 waagent[1818]: 2026-01-24T00:43:42.602169Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.179
Jan 24 00:43:42.618516 waagent[1818]: 2026-01-24T00:43:42.602823Z INFO Daemon
Jan 24 00:43:42.618516 waagent[1818]: 2026-01-24T00:43:42.603403Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: c1368ede-ae8a-478b-a28d-e2ffae0aa48e eTag: 16350381034214168189 source: Fabric]
Jan 24 00:43:42.618516 waagent[1818]: 2026-01-24T00:43:42.604207Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 24 00:43:42.618516 waagent[1818]: 2026-01-24T00:43:42.605436Z INFO Daemon
Jan 24 00:43:42.618516 waagent[1818]: 2026-01-24T00:43:42.606345Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 24 00:43:42.620710 waagent[1818]: 2026-01-24T00:43:42.620668Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 24 00:43:42.688090 waagent[1818]: 2026-01-24T00:43:42.687963Z INFO Daemon Downloaded certificate {'thumbprint': '6B068E7114567446D724D2574B0BBA050758371A', 'hasPrivateKey': True}
Jan 24 00:43:42.694228 waagent[1818]: 2026-01-24T00:43:42.694166Z INFO Daemon Fetch goal state completed
Jan 24 00:43:42.701342 waagent[1818]: 2026-01-24T00:43:42.701284Z INFO Daemon Daemon Starting provisioning
Jan 24 00:43:42.708694 waagent[1818]: 2026-01-24T00:43:42.701518Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 24 00:43:42.708694 waagent[1818]: 2026-01-24T00:43:42.702090Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-e69c55f9b7]
Jan 24 00:43:42.775155 waagent[1818]: 2026-01-24T00:43:42.775056Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-e69c55f9b7]
Jan 24 00:43:42.778951 waagent[1818]: 2026-01-24T00:43:42.778880Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 24 00:43:42.782287 waagent[1818]: 2026-01-24T00:43:42.782228Z INFO Daemon Daemon Primary interface is [eth0]
Jan 24 00:43:42.824958 systemd-networkd[1356]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 24 00:43:42.824973 systemd-networkd[1356]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 24 00:43:42.825026 systemd-networkd[1356]: eth0: DHCP lease lost
Jan 24 00:43:42.826703 waagent[1818]: 2026-01-24T00:43:42.826604Z INFO Daemon Daemon Create user account if not exists
Jan 24 00:43:42.829539 systemd-networkd[1356]: eth0: DHCPv6 lease lost
Jan 24 00:43:42.829921 waagent[1818]: 2026-01-24T00:43:42.829817Z INFO Daemon Daemon User core already exists, skip useradd
Jan 24 00:43:42.832915 waagent[1818]: 2026-01-24T00:43:42.832781Z INFO Daemon Daemon Configure sudoer
Jan 24 00:43:42.845030 waagent[1818]: 2026-01-24T00:43:42.833247Z INFO Daemon Daemon Configure sshd
Jan 24 00:43:42.845030 waagent[1818]: 2026-01-24T00:43:42.835294Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 24 00:43:42.845030 waagent[1818]: 2026-01-24T00:43:42.835979Z INFO Daemon Daemon Deploy ssh public key.
Jan 24 00:43:42.871380 systemd-networkd[1356]: eth0: DHCPv4 address 10.200.4.34/24, gateway 10.200.4.1 acquired from 168.63.129.16
Jan 24 00:43:43.971216 waagent[1818]: 2026-01-24T00:43:43.971114Z INFO Daemon Daemon Provisioning complete
Jan 24 00:43:43.982191 waagent[1818]: 2026-01-24T00:43:43.982139Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 24 00:43:43.989366 waagent[1818]: 2026-01-24T00:43:43.982422Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 24 00:43:43.989366 waagent[1818]: 2026-01-24T00:43:43.983350Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 24 00:43:44.104836 waagent[1904]: 2026-01-24T00:43:44.104747Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 24 00:43:44.105261 waagent[1904]: 2026-01-24T00:43:44.104896Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6
Jan 24 00:43:44.105261 waagent[1904]: 2026-01-24T00:43:44.104975Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 24 00:43:44.152307 waagent[1904]: 2026-01-24T00:43:44.152213Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 24 00:43:44.152568 waagent[1904]: 2026-01-24T00:43:44.152511Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 24 00:43:44.152679 waagent[1904]: 2026-01-24T00:43:44.152628Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 24 00:43:44.160315 waagent[1904]: 2026-01-24T00:43:44.160252Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 24 00:43:44.164763 waagent[1904]: 2026-01-24T00:43:44.164712Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.179
Jan 24 00:43:44.165185 waagent[1904]: 2026-01-24T00:43:44.165131Z INFO ExtHandler
Jan 24 00:43:44.165256 waagent[1904]: 2026-01-24T00:43:44.165217Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: aeff49f8-8b85-4b84-8fe7-adc73b7e3202 eTag: 16350381034214168189 source: Fabric]
Jan 24 00:43:44.165585 waagent[1904]: 2026-01-24T00:43:44.165534Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 24 00:43:44.166125 waagent[1904]: 2026-01-24T00:43:44.166068Z INFO ExtHandler Jan 24 00:43:44.166186 waagent[1904]: 2026-01-24T00:43:44.166150Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 24 00:43:44.169252 waagent[1904]: 2026-01-24T00:43:44.169210Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 24 00:43:44.266420 waagent[1904]: 2026-01-24T00:43:44.264962Z INFO ExtHandler Downloaded certificate {'thumbprint': '6B068E7114567446D724D2574B0BBA050758371A', 'hasPrivateKey': True} Jan 24 00:43:44.266420 waagent[1904]: 2026-01-24T00:43:44.265897Z INFO ExtHandler Fetch goal state completed Jan 24 00:43:44.279342 waagent[1904]: 2026-01-24T00:43:44.279265Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1904 Jan 24 00:43:44.279506 waagent[1904]: 2026-01-24T00:43:44.279456Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 24 00:43:44.281061 waagent[1904]: 2026-01-24T00:43:44.281002Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 24 00:43:44.281435 waagent[1904]: 2026-01-24T00:43:44.281384Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 24 00:43:44.991143 waagent[1904]: 2026-01-24T00:43:44.991084Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 24 00:43:44.991431 waagent[1904]: 2026-01-24T00:43:44.991370Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 24 00:43:44.998460 waagent[1904]: 2026-01-24T00:43:44.998351Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 24 00:43:45.005274 systemd[1]: Reloading requested from client PID 1917 ('systemctl') (unit waagent.service)... Jan 24 00:43:45.005290 systemd[1]: Reloading... Jan 24 00:43:45.090402 zram_generator::config[1949]: No configuration found. Jan 24 00:43:45.214658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:43:45.300131 systemd[1]: Reloading finished in 294 ms. Jan 24 00:43:45.325340 waagent[1904]: 2026-01-24T00:43:45.324915Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 24 00:43:45.335758 systemd[1]: Reloading requested from client PID 2008 ('systemctl') (unit waagent.service)... Jan 24 00:43:45.335774 systemd[1]: Reloading... Jan 24 00:43:45.401499 zram_generator::config[2039]: No configuration found. Jan 24 00:43:45.534844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:43:45.611139 systemd[1]: Reloading finished in 274 ms. Jan 24 00:43:45.639342 waagent[1904]: 2026-01-24T00:43:45.637802Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 24 00:43:45.639342 waagent[1904]: 2026-01-24T00:43:45.638005Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 24 00:43:45.960038 waagent[1904]: 2026-01-24T00:43:45.959878Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 24 00:43:45.960649 waagent[1904]: 2026-01-24T00:43:45.960578Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 24 00:43:45.961560 waagent[1904]: 2026-01-24T00:43:45.961496Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 24 00:43:45.962389 waagent[1904]: 2026-01-24T00:43:45.962153Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 24 00:43:45.962389 waagent[1904]: 2026-01-24T00:43:45.962266Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:43:45.962511 waagent[1904]: 2026-01-24T00:43:45.962413Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 24 00:43:45.962629 waagent[1904]: 2026-01-24T00:43:45.962556Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:43:45.962733 waagent[1904]: 2026-01-24T00:43:45.962681Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 24 00:43:45.962947 waagent[1904]: 2026-01-24T00:43:45.962888Z INFO EnvHandler ExtHandler Configure routes Jan 24 00:43:45.963049 waagent[1904]: 2026-01-24T00:43:45.963001Z INFO EnvHandler ExtHandler Gateway:None Jan 24 00:43:45.963143 waagent[1904]: 2026-01-24T00:43:45.963101Z INFO EnvHandler ExtHandler Routes:None Jan 24 00:43:45.965346 waagent[1904]: 2026-01-24T00:43:45.964056Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 24 00:43:45.965346 waagent[1904]: 2026-01-24T00:43:45.964297Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 24 00:43:45.965346 waagent[1904]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 24 00:43:45.965346 waagent[1904]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Jan 24 00:43:45.965346 waagent[1904]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 24 00:43:45.965346 waagent[1904]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:43:45.965346 waagent[1904]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:43:45.965346 waagent[1904]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 24 00:43:45.965346 waagent[1904]: 2026-01-24T00:43:45.964498Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 24 00:43:45.965346 waagent[1904]: 2026-01-24T00:43:45.964629Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 24 00:43:45.965346 waagent[1904]: 2026-01-24T00:43:45.964975Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
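The routing table MonitorHandler dumps above is raw /proc/net/route content, where Destination, Gateway, and Mask are little-endian hex IPv4 values. A short sketch decoding the exact entries shown:

import socket
import struct

def hex_to_ip(h: str) -> str:
    # "<L" unpacks little-endian, matching the /proc/net/route encoding
    return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

print(hex_to_ip("0104C80A"))  # 10.200.4.1, the default gateway above
print(hex_to_ip("0004C80A"))  # 10.200.4.0, the on-link /24
print(hex_to_ip("10813FA8"))  # 168.63.129.16, the WireServer host route
print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254, the IMDS host route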
Jan 24 00:43:45.965734 waagent[1904]: 2026-01-24T00:43:45.964906Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 24 00:43:45.965734 waagent[1904]: 2026-01-24T00:43:45.965651Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 24 00:43:45.973241 waagent[1904]: 2026-01-24T00:43:45.973201Z INFO ExtHandler ExtHandler Jan 24 00:43:45.975355 waagent[1904]: 2026-01-24T00:43:45.973299Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5dc254ac-dc36-404b-bae3-76fac717dfe0 correlation 386b1bf8-49dc-47e4-ab5f-7ba9edfec0c8 created: 2026-01-24T00:42:43.896523Z] Jan 24 00:43:45.975355 waagent[1904]: 2026-01-24T00:43:45.973776Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 24 00:43:45.975840 waagent[1904]: 2026-01-24T00:43:45.975776Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 24 00:43:46.008911 waagent[1904]: 2026-01-24T00:43:46.008853Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 4978E494-5EC7-4470-BD52-17574879EB92;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 24 00:43:46.016784 waagent[1904]: 2026-01-24T00:43:46.016726Z INFO MonitorHandler ExtHandler Network interfaces: Jan 24 00:43:46.016784 waagent[1904]: Executing ['ip', '-a', '-o', 'link']: Jan 24 00:43:46.016784 waagent[1904]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 24 00:43:46.016784 waagent[1904]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:7d:b2 brd ff:ff:ff:ff:ff:ff Jan 24 00:43:46.016784 waagent[1904]: 3: enP37755s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:2d:7d:b2 brd ff:ff:ff:ff:ff:ff\ altname enP37755p0s2 Jan 24 00:43:46.016784 waagent[1904]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 24 00:43:46.016784 waagent[1904]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 24 00:43:46.016784 waagent[1904]: 2: eth0 inet 10.200.4.34/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 24 00:43:46.016784 waagent[1904]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 24 00:43:46.016784 waagent[1904]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 24 00:43:46.016784 waagent[1904]: 2: eth0 inet6 fe80::7e1e:52ff:fe2d:7db2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 24 00:43:46.061112 waagent[1904]: 2026-01-24T00:43:46.061047Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 24 00:43:46.061112 waagent[1904]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.061112 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.061112 waagent[1904]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.061112 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.061112 waagent[1904]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.061112 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.061112 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:43:46.061112 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:43:46.061112 waagent[1904]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:43:46.064397 waagent[1904]: 2026-01-24T00:43:46.064319Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 24 00:43:46.064397 waagent[1904]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.064397 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.064397 waagent[1904]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.064397 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.064397 waagent[1904]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 24 00:43:46.064397 waagent[1904]: pkts bytes target prot opt in out source destination Jan 24 00:43:46.064397 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 24 00:43:46.064397 waagent[1904]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 24 00:43:46.064397 waagent[1904]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 24 00:43:46.064774 waagent[1904]: 2026-01-24T00:43:46.064653Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 24 00:43:51.808788 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 24 00:43:51.818531 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:43:51.922448 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:43:51.931629 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:43:52.633650 kubelet[2138]: E0124 00:43:52.633584 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:43:52.637355 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:43:52.637560 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:43:59.987073 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 24 00:43:59.991614 systemd[1]: Started sshd@0-10.200.4.34:22-10.200.16.10:56212.service - OpenSSH per-connection server daemon (10.200.16.10:56212). 
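The three OUTPUT rules EnvHandler prints above (allow DNS to 168.63.129.16, allow root-owned traffic to it, drop other new connections) can be restated as plain iptables invocations. A sketch, with the caveat that chain and table placement vary across waagent versions, so treat this as illustrative rather than the agent's exact commands:

import subprocess

WIRESERVER = "168.63.129.16"
RULES = [
    # tcp dpt:53 -> ACCEPT: any process may resolve DNS via the wireserver
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # owner UID match 0 -> ACCEPT: root (the agent itself) may talk to it
    ["-p", "tcp", "-d", WIRESERVER, "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # ctstate INVALID,NEW -> DROP: everyone else is blocked
    ["-p", "tcp", "-d", WIRESERVER, "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in RULES:
    subprocess.run(["iptables", "-A", "OUTPUT", *rule], check=True)  # needs root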
Jan 24 00:44:00.663369 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 56212 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:00.665140 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:00.669706 systemd-logind[1697]: New session 3 of user core. Jan 24 00:44:00.676481 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 24 00:44:01.193245 systemd[1]: Started sshd@1-10.200.4.34:22-10.200.16.10:56222.service - OpenSSH per-connection server daemon (10.200.16.10:56222). Jan 24 00:44:01.805643 sshd[2151]: Accepted publickey for core from 10.200.16.10 port 56222 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:01.807410 sshd[2151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:01.813022 systemd-logind[1697]: New session 4 of user core. Jan 24 00:44:01.820500 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 24 00:44:02.242216 sshd[2151]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:02.246599 systemd[1]: sshd@1-10.200.4.34:22-10.200.16.10:56222.service: Deactivated successfully. Jan 24 00:44:02.248765 systemd[1]: session-4.scope: Deactivated successfully. Jan 24 00:44:02.249601 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. Jan 24 00:44:02.250615 systemd-logind[1697]: Removed session 4. Jan 24 00:44:02.349016 systemd[1]: Started sshd@2-10.200.4.34:22-10.200.16.10:56232.service - OpenSSH per-connection server daemon (10.200.16.10:56232). Jan 24 00:44:02.808738 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 24 00:44:02.817686 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:02.920205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:02.925119 (kubelet)[2168]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:02.964198 sshd[2158]: Accepted publickey for core from 10.200.16.10 port 56232 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:02.966045 sshd[2158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:02.972582 systemd-logind[1697]: New session 5 of user core. Jan 24 00:44:02.980497 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 24 00:44:03.071747 chronyd[1705]: Selected source PHC0 Jan 24 00:44:03.394717 sshd[2158]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:03.399296 systemd[1]: sshd@2-10.200.4.34:22-10.200.16.10:56232.service: Deactivated successfully. Jan 24 00:44:03.401182 systemd[1]: session-5.scope: Deactivated successfully. Jan 24 00:44:03.402004 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. Jan 24 00:44:03.402881 systemd-logind[1697]: Removed session 5. Jan 24 00:44:03.505640 systemd[1]: Started sshd@3-10.200.4.34:22-10.200.16.10:56244.service - OpenSSH per-connection server daemon (10.200.16.10:56244). 
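The sshd lines above follow a fixed message shape that is easy to mine when auditing session churn. A small sketch whose regex mirrors exactly the format in this log:

import re

LINE = ("Accepted publickey for core from 10.200.16.10 port 56212 ssh2: "
        "RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg")

PAT = re.compile(
    r"Accepted publickey for (?P<user>\S+) from (?P<ip>\S+) "
    r"port (?P<port>\d+) ssh2: (?P<keytype>\S+) (?P<fp>\S+)"
)
m = PAT.search(LINE)
print(m.group("user"), m.group("ip"), m.group("fp"))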
Jan 24 00:44:03.581711 kubelet[2168]: E0124 00:44:03.581638 2168 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:03.584104 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:03.584312 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:44:04.102806 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 56244 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:04.104222 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:04.109350 systemd-logind[1697]: New session 6 of user core. Jan 24 00:44:04.114506 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 24 00:44:04.532102 sshd[2178]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:04.536900 systemd[1]: sshd@3-10.200.4.34:22-10.200.16.10:56244.service: Deactivated successfully. Jan 24 00:44:04.539138 systemd[1]: session-6.scope: Deactivated successfully. Jan 24 00:44:04.540146 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. Jan 24 00:44:04.541225 systemd-logind[1697]: Removed session 6. Jan 24 00:44:04.644685 systemd[1]: Started sshd@4-10.200.4.34:22-10.200.16.10:56246.service - OpenSSH per-connection server daemon (10.200.16.10:56246). Jan 24 00:44:05.244572 sshd[2187]: Accepted publickey for core from 10.200.16.10 port 56246 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:05.246316 sshd[2187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:05.250385 systemd-logind[1697]: New session 7 of user core. Jan 24 00:44:05.264467 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 24 00:44:05.794507 sudo[2190]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 24 00:44:05.794959 sudo[2190]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:05.822648 sudo[2190]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:05.919675 sshd[2187]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:05.923371 systemd[1]: sshd@4-10.200.4.34:22-10.200.16.10:56246.service: Deactivated successfully. Jan 24 00:44:05.925865 systemd[1]: session-7.scope: Deactivated successfully. Jan 24 00:44:05.927705 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. Jan 24 00:44:05.929022 systemd-logind[1697]: Removed session 7. Jan 24 00:44:06.028543 systemd[1]: Started sshd@5-10.200.4.34:22-10.200.16.10:56256.service - OpenSSH per-connection server daemon (10.200.16.10:56256). Jan 24 00:44:06.642398 sshd[2195]: Accepted publickey for core from 10.200.16.10 port 56256 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:06.648605 sshd[2195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:06.653465 systemd-logind[1697]: New session 8 of user core. Jan 24 00:44:06.659490 systemd[1]: Started session-8.scope - Session 8 of User core. 
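kubelet keeps dying with the same error because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written by kubeadm init/join, so the unit crash-loops until provisioning reaches that step. The "Scheduled restart job" timestamps show systemd's pacing, which a quick delta computation makes explicit (the stock kubelet unit uses RestartSec=10, plus roughly a second of runtime before each exit; the third attempt appears further down):

from datetime import datetime

restarts = ["00:43:51.808788", "00:44:02.808738", "00:44:13.808873"]
ts = [datetime.strptime(t, "%H:%M:%S.%f") for t in restarts]
print([round((b - a).total_seconds(), 3) for a, b in zip(ts, ts[1:])])
# -> [11.0, 11.0]: one attempt every ~11 seconds until config.yaml appears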
Jan 24 00:44:06.981158 sudo[2199]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 24 00:44:06.981636 sudo[2199]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:06.984903 sudo[2199]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:06.989665 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 24 00:44:06.990001 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:07.002651 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 24 00:44:07.004059 auditctl[2202]: No rules Jan 24 00:44:07.004482 systemd[1]: audit-rules.service: Deactivated successfully. Jan 24 00:44:07.004682 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 24 00:44:07.007489 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 24 00:44:07.032696 augenrules[2220]: No rules Jan 24 00:44:07.034061 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 24 00:44:07.035042 sudo[2198]: pam_unix(sudo:session): session closed for user root Jan 24 00:44:07.133098 sshd[2195]: pam_unix(sshd:session): session closed for user core Jan 24 00:44:07.136481 systemd[1]: sshd@5-10.200.4.34:22-10.200.16.10:56256.service: Deactivated successfully. Jan 24 00:44:07.138829 systemd[1]: session-8.scope: Deactivated successfully. Jan 24 00:44:07.140582 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit. Jan 24 00:44:07.141850 systemd-logind[1697]: Removed session 8. Jan 24 00:44:07.241555 systemd[1]: Started sshd@6-10.200.4.34:22-10.200.16.10:56264.service - OpenSSH per-connection server daemon (10.200.16.10:56264). Jan 24 00:44:07.850313 sshd[2228]: Accepted publickey for core from 10.200.16.10 port 56264 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:44:07.852046 sshd[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:44:07.857637 systemd-logind[1697]: New session 9 of user core. Jan 24 00:44:07.864497 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 24 00:44:08.186975 sudo[2231]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 24 00:44:08.187366 sudo[2231]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 24 00:44:09.578646 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 24 00:44:09.580228 (dockerd)[2246]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 24 00:44:11.390777 dockerd[2246]: time="2026-01-24T00:44:11.390716542Z" level=info msg="Starting up" Jan 24 00:44:11.862202 dockerd[2246]: time="2026-01-24T00:44:11.861958907Z" level=info msg="Loading containers: start." Jan 24 00:44:11.979381 kernel: Initializing XFRM netlink socket Jan 24 00:44:12.145304 systemd-networkd[1356]: docker0: Link UP Jan 24 00:44:12.167488 dockerd[2246]: time="2026-01-24T00:44:12.167454164Z" level=info msg="Loading containers: done." 
Jan 24 00:44:12.226721 dockerd[2246]: time="2026-01-24T00:44:12.226673068Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 24 00:44:12.226895 dockerd[2246]: time="2026-01-24T00:44:12.226794871Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 24 00:44:12.226952 dockerd[2246]: time="2026-01-24T00:44:12.226919173Z" level=info msg="Daemon has completed initialization" Jan 24 00:44:12.293300 dockerd[2246]: time="2026-01-24T00:44:12.292902138Z" level=info msg="API listen on /run/docker.sock" Jan 24 00:44:12.293161 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 24 00:44:13.483402 containerd[1714]: time="2026-01-24T00:44:13.483358961Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 24 00:44:13.808873 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 24 00:44:13.819555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:14.570432 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:14.575295 (kubelet)[2390]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:14.612119 kubelet[2390]: E0124 00:44:14.612080 2390 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:14.614220 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:14.614454 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:44:14.955519 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1244262635.mount: Deactivated successfully. 
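Once dockerd reports "API listen on /run/docker.sock", the daemon can be health-checked without any client library: the Docker Engine API exposes /_ping, which answers 200 with body "OK" over that unix socket. A stdlib-only sketch:

import socket

s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
s.connect("/run/docker.sock")          # the path logged above
s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
reply = s.recv(4096).decode()
s.close()
print(reply.splitlines()[0])   # expect a 200 status line
print(reply.splitlines()[-1])  # "OK" when the daemon is healthy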
Jan 24 00:44:16.559296 containerd[1714]: time="2026-01-24T00:44:16.559245983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:16.561630 containerd[1714]: time="2026-01-24T00:44:16.561440035Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Jan 24 00:44:16.564399 containerd[1714]: time="2026-01-24T00:44:16.564014496Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:16.568112 containerd[1714]: time="2026-01-24T00:44:16.568076192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:16.569246 containerd[1714]: time="2026-01-24T00:44:16.569208819Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 3.085802157s" Jan 24 00:44:16.569356 containerd[1714]: time="2026-01-24T00:44:16.569253620Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 24 00:44:16.569960 containerd[1714]: time="2026-01-24T00:44:16.569823633Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 24 00:44:18.055509 containerd[1714]: time="2026-01-24T00:44:18.055375852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:18.059446 containerd[1714]: time="2026-01-24T00:44:18.058919636Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448" Jan 24 00:44:18.062982 containerd[1714]: time="2026-01-24T00:44:18.062488221Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:18.068058 containerd[1714]: time="2026-01-24T00:44:18.068019352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:18.069106 containerd[1714]: time="2026-01-24T00:44:18.069071277Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.499210143s" Jan 24 00:44:18.069236 containerd[1714]: time="2026-01-24T00:44:18.069215380Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 24 00:44:18.069970 
containerd[1714]: time="2026-01-24T00:44:18.069937898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 24 00:44:19.363475 containerd[1714]: time="2026-01-24T00:44:19.363418862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:19.365727 containerd[1714]: time="2026-01-24T00:44:19.365670516Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 24 00:44:19.368508 containerd[1714]: time="2026-01-24T00:44:19.368462483Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:19.373693 containerd[1714]: time="2026-01-24T00:44:19.373659208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:19.374837 containerd[1714]: time="2026-01-24T00:44:19.374685233Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.304711435s" Jan 24 00:44:19.374837 containerd[1714]: time="2026-01-24T00:44:19.374724034Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 24 00:44:19.375710 containerd[1714]: time="2026-01-24T00:44:19.375683957Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 24 00:44:20.533699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1278461363.mount: Deactivated successfully. 
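The containerd "Pulled image ... size X in Ys" lines above give both byte counts and wall time, enough to estimate pull throughput from registry.k8s.io. A worked sketch over the three control-plane images pulled so far:

SAMPLES = [
    ("kube-apiserver", 27064672, "3.085802157s"),
    ("kube-controller-manager", 22819474, "1.499210143s"),
    ("kube-scheduler", 17382979, "1.304711435s"),
]

def seconds(d: str) -> float:
    # durations in this log appear either as "1.3...s" or "479...ms" strings
    return float(d[:-2]) / 1000 if d.endswith("ms") else float(d[:-1])

for name, size, dur in SAMPLES:
    print(f"{name}: {size / 1e6 / seconds(dur):.1f} MB/s")
# kube-apiserver works out to roughly 8.8 MB/s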
Jan 24 00:44:20.934464 containerd[1714]: time="2026-01-24T00:44:20.934403982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:20.937012 containerd[1714]: time="2026-01-24T00:44:20.936960943Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 24 00:44:20.941307 containerd[1714]: time="2026-01-24T00:44:20.940050218Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:20.947665 containerd[1714]: time="2026-01-24T00:44:20.947621800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:20.948441 containerd[1714]: time="2026-01-24T00:44:20.948404519Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.57268826s" Jan 24 00:44:20.948565 containerd[1714]: time="2026-01-24T00:44:20.948544922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 24 00:44:20.949165 containerd[1714]: time="2026-01-24T00:44:20.949086935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 24 00:44:21.597690 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695062905.mount: Deactivated successfully. Jan 24 00:44:21.937364 kernel: hv_balloon: Max. 
dynamic memory size: 8192 MB Jan 24 00:44:22.983405 containerd[1714]: time="2026-01-24T00:44:22.983353208Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.985646 containerd[1714]: time="2026-01-24T00:44:22.985454359Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 24 00:44:22.992506 containerd[1714]: time="2026-01-24T00:44:22.992474228Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.997653 containerd[1714]: time="2026-01-24T00:44:22.997594651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:22.998841 containerd[1714]: time="2026-01-24T00:44:22.998655376Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.049391937s" Jan 24 00:44:22.998841 containerd[1714]: time="2026-01-24T00:44:22.998693877Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 24 00:44:22.999561 containerd[1714]: time="2026-01-24T00:44:22.999317992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 24 00:44:23.446504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1496025741.mount: Deactivated successfully. 
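The tmpmount units being cleaned up above carry systemd unit-name escaping: "/" in the path becomes "-", and a literal "-" becomes "\x2d" (the transform systemd-escape applies). Reversing it recovers the real mount path; a sketch:

def unescape_unit(name: str) -> str:
    name = name.removesuffix(".mount")
    out = []
    i = 0
    while i < len(name):
        if name.startswith(r"\x", i):
            out.append(chr(int(name[i + 2:i + 4], 16)))  # "\x2d" -> "-"
            i += 4
        elif name[i] == "-":
            out.append("/")  # unescaped "-" is a path separator
            i += 1
        else:
            out.append(name[i])
            i += 1
    return "/" + "".join(out)

print(unescape_unit(r"var-lib-containerd-tmpmounts-containerd\x2dmount1496025741.mount"))
# -> /var/lib/containerd/tmpmounts/containerd-mount1496025741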
Jan 24 00:44:23.465625 containerd[1714]: time="2026-01-24T00:44:23.465581817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:23.468967 containerd[1714]: time="2026-01-24T00:44:23.468917698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 24 00:44:23.473251 containerd[1714]: time="2026-01-24T00:44:23.473203301Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:23.477866 containerd[1714]: time="2026-01-24T00:44:23.477815412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:23.478709 containerd[1714]: time="2026-01-24T00:44:23.478573830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 479.207537ms" Jan 24 00:44:23.478709 containerd[1714]: time="2026-01-24T00:44:23.478609531Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 24 00:44:23.479102 containerd[1714]: time="2026-01-24T00:44:23.479079642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 24 00:44:24.032816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189532696.mount: Deactivated successfully. Jan 24 00:44:24.809017 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 24 00:44:24.814561 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:24.942507 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:24.951736 (kubelet)[2584]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 24 00:44:24.989178 kubelet[2584]: E0124 00:44:24.989128 2584 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 24 00:44:24.991479 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 24 00:44:24.991695 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 24 00:44:25.158843 update_engine[1699]: I20260124 00:44:25.157646 1699 update_attempter.cc:509] Updating boot flags... 
Jan 24 00:44:26.071292 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2608) Jan 24 00:44:31.098527 containerd[1714]: time="2026-01-24T00:44:31.098466088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:31.101302 containerd[1714]: time="2026-01-24T00:44:31.101247453Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 24 00:44:31.104519 containerd[1714]: time="2026-01-24T00:44:31.104466728Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:31.109773 containerd[1714]: time="2026-01-24T00:44:31.109724151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:31.110947 containerd[1714]: time="2026-01-24T00:44:31.110789976Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 7.63159073s" Jan 24 00:44:31.110947 containerd[1714]: time="2026-01-24T00:44:31.110830877Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 24 00:44:34.106810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:34.112606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:34.143752 systemd[1]: Reloading requested from client PID 2665 ('systemctl') (unit session-9.scope)... Jan 24 00:44:34.143769 systemd[1]: Reloading... Jan 24 00:44:34.255450 zram_generator::config[2708]: No configuration found. Jan 24 00:44:34.384470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:44:34.465270 systemd[1]: Reloading finished in 320 ms. Jan 24 00:44:34.577022 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 24 00:44:34.577177 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 24 00:44:34.577626 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:34.583639 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:35.482296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:35.487781 (kubelet)[2772]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:44:35.527350 kubelet[2772]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:44:35.527350 kubelet[2772]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
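Both deprecation warnings above point the same way: flag values belong in the kubelet config file. A minimal sketch of moving --volume-plugin-dir there, using the volumePluginDir field from the upstream KubeletConfiguration reference (the field name is an assumption here, not read from this log; the pod-infra image, by contrast, now comes from the CRI runtime rather than any kubelet setting). In practice kubeadm owns this file, so this is illustrative only:

CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# matches the flexvolume directory this kubelet actually uses (logged below)
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
"""

with open("/var/lib/kubelet/config.yaml", "w") as f:  # the path kubelet loads
    f.write(CONFIG)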
Jan 24 00:44:35.527350 kubelet[2772]: I0124 00:44:35.527269 2772 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:44:35.808831 kubelet[2772]: I0124 00:44:35.807263 2772 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:44:35.808831 kubelet[2772]: I0124 00:44:35.807296 2772 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:44:35.808831 kubelet[2772]: I0124 00:44:35.807343 2772 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:44:35.808831 kubelet[2772]: I0124 00:44:35.807354 2772 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:44:35.808831 kubelet[2772]: I0124 00:44:35.807709 2772 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:44:38.309994 kubelet[2772]: E0124 00:44:38.309739 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:44:38.309994 kubelet[2772]: I0124 00:44:38.309917 2772 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:44:39.168448 kubelet[2772]: E0124 00:44:39.168157 2772 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:44:39.168448 kubelet[2772]: I0124 00:44:39.168218 2772 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:44:39.172020 kubelet[2772]: I0124 00:44:39.171981 2772 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:44:39.172828 kubelet[2772]: I0124 00:44:39.172793 2772 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:44:39.173000 kubelet[2772]: I0124 00:44:39.172825 2772 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e69c55f9b7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:44:39.173213 kubelet[2772]: I0124 00:44:39.173003 2772 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:44:39.173213 kubelet[2772]: I0124 00:44:39.173017 2772 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:44:39.173213 kubelet[2772]: I0124 00:44:39.173134 2772 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:44:39.180870 kubelet[2772]: I0124 00:44:39.180847 2772 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:44:39.182670 kubelet[2772]: I0124 00:44:39.182270 2772 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:44:39.182670 kubelet[2772]: I0124 00:44:39.182298 2772 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:44:39.182670 kubelet[2772]: I0124 00:44:39.182346 2772 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:44:39.182670 kubelet[2772]: I0124 00:44:39.182366 2772 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:44:39.184816 kubelet[2772]: E0124 00:44:39.184550 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e69c55f9b7&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:44:39.184816 kubelet[2772]: E0124 00:44:39.184696 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:44:39.186539 kubelet[2772]: I0124 00:44:39.186504 2772 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:44:39.187539 kubelet[2772]: I0124 00:44:39.187201 2772 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:44:39.187539 kubelet[2772]: I0124 00:44:39.187253 2772 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 00:44:39.187539 kubelet[2772]: W0124 00:44:39.187306 2772 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 24 00:44:39.190191 kubelet[2772]: I0124 00:44:39.190177 2772 server.go:1262] "Started kubelet" Jan 24 00:44:39.191618 kubelet[2772]: I0124 00:44:39.191601 2772 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:44:39.195142 kubelet[2772]: I0124 00:44:39.195110 2772 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:44:39.196891 kubelet[2772]: I0124 00:44:39.196866 2772 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:44:39.201153 kubelet[2772]: I0124 00:44:39.201114 2772 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:44:39.201234 kubelet[2772]: I0124 00:44:39.201169 2772 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:44:39.201388 kubelet[2772]: I0124 00:44:39.201369 2772 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:44:39.201927 kubelet[2772]: I0124 00:44:39.201899 2772 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:44:39.203733 kubelet[2772]: I0124 00:44:39.203411 2772 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:44:39.203733 kubelet[2772]: E0124 00:44:39.203606 2772 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" Jan 24 00:44:39.203733 kubelet[2772]: E0124 00:44:39.201570 2772 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-e69c55f9b7.188d841ac8406371 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-e69c55f9b7,UID:ci-4081.3.6-n-e69c55f9b7,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-e69c55f9b7,},FirstTimestamp:2026-01-24 00:44:39.190152049 +0000 UTC m=+3.698745946,LastTimestamp:2026-01-24 00:44:39.190152049 +0000 UTC m=+3.698745946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-e69c55f9b7,}" Jan 24 00:44:39.206341 
kubelet[2772]: E0124 00:44:39.206167 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e69c55f9b7?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="200ms" Jan 24 00:44:39.206745 kubelet[2772]: I0124 00:44:39.206726 2772 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:44:39.206806 kubelet[2772]: I0124 00:44:39.206777 2772 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:44:39.208474 kubelet[2772]: E0124 00:44:39.208243 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:44:39.209531 kubelet[2772]: E0124 00:44:39.209506 2772 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:44:39.211069 kubelet[2772]: I0124 00:44:39.209687 2772 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:44:39.211069 kubelet[2772]: I0124 00:44:39.209700 2772 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:44:39.211069 kubelet[2772]: I0124 00:44:39.209783 2772 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:44:39.219888 kubelet[2772]: I0124 00:44:39.219849 2772 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:44:39.221558 kubelet[2772]: I0124 00:44:39.221530 2772 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:44:39.221558 kubelet[2772]: I0124 00:44:39.221549 2772 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:44:39.221672 kubelet[2772]: I0124 00:44:39.221574 2772 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:44:39.221672 kubelet[2772]: E0124 00:44:39.221623 2772 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:44:39.229725 kubelet[2772]: E0124 00:44:39.229698 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:44:39.248105 kubelet[2772]: I0124 00:44:39.248087 2772 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:44:39.248105 kubelet[2772]: I0124 00:44:39.248101 2772 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:44:39.248214 kubelet[2772]: I0124 00:44:39.248119 2772 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:44:39.253036 kubelet[2772]: I0124 00:44:39.252982 2772 policy_none.go:49] "None policy: Start" Jan 24 00:44:39.253036 kubelet[2772]: I0124 00:44:39.253031 2772 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:44:39.253460 kubelet[2772]: I0124 00:44:39.253073 2772 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:44:39.257848 kubelet[2772]: I0124 00:44:39.257795 2772 policy_none.go:47] "Start" Jan 24 00:44:39.261986 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 24 00:44:39.274317 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 24 00:44:39.277587 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 24 00:44:39.287979 kubelet[2772]: E0124 00:44:39.287960 2772 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:44:39.288543 kubelet[2772]: I0124 00:44:39.288238 2772 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:44:39.288543 kubelet[2772]: I0124 00:44:39.288255 2772 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:44:39.288543 kubelet[2772]: I0124 00:44:39.288467 2772 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:44:39.290001 kubelet[2772]: E0124 00:44:39.289844 2772 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 24 00:44:39.290001 kubelet[2772]: E0124 00:44:39.289888 2772 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-e69c55f9b7\" not found" Jan 24 00:44:39.333613 systemd[1]: Created slice kubepods-burstable-pod8f5d0253b445f2ad7bd700e34dc2ea0c.slice - libcontainer container kubepods-burstable-pod8f5d0253b445f2ad7bd700e34dc2ea0c.slice. 
Jan 24 00:44:39.339962 kubelet[2772]: E0124 00:44:39.339929 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.345218 systemd[1]: Created slice kubepods-burstable-poddfc0fb99fce8863a5ae4ca2f12b8876f.slice - libcontainer container kubepods-burstable-poddfc0fb99fce8863a5ae4ca2f12b8876f.slice. Jan 24 00:44:39.352550 kubelet[2772]: E0124 00:44:39.352354 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.354866 systemd[1]: Created slice kubepods-burstable-pod04e393ef98b8099e929431029d40a74c.slice - libcontainer container kubepods-burstable-pod04e393ef98b8099e929431029d40a74c.slice. Jan 24 00:44:39.356534 kubelet[2772]: E0124 00:44:39.356515 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.390383 kubelet[2772]: I0124 00:44:39.390316 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.390778 kubelet[2772]: E0124 00:44:39.390743 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407582 kubelet[2772]: I0124 00:44:39.407360 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407582 kubelet[2772]: I0124 00:44:39.407400 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407582 kubelet[2772]: I0124 00:44:39.407432 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407582 kubelet[2772]: E0124 00:44:39.407442 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e69c55f9b7?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="400ms" Jan 24 00:44:39.407582 kubelet[2772]: I0124 00:44:39.407462 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407881 kubelet[2772]: I0124 00:44:39.407488 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407881 kubelet[2772]: I0124 00:44:39.407526 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407881 kubelet[2772]: I0124 00:44:39.407564 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407881 kubelet[2772]: I0124 00:44:39.407592 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04e393ef98b8099e929431029d40a74c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e69c55f9b7\" (UID: \"04e393ef98b8099e929431029d40a74c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.407881 kubelet[2772]: I0124 00:44:39.407617 2772 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.593638 kubelet[2772]: I0124 00:44:39.593603 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.594057 kubelet[2772]: E0124 00:44:39.594022 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.646634 containerd[1714]: time="2026-01-24T00:44:39.646579875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e69c55f9b7,Uid:8f5d0253b445f2ad7bd700e34dc2ea0c,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:39.659597 containerd[1714]: time="2026-01-24T00:44:39.659558975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e69c55f9b7,Uid:dfc0fb99fce8863a5ae4ca2f12b8876f,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:39.664255 containerd[1714]: time="2026-01-24T00:44:39.664214182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e69c55f9b7,Uid:04e393ef98b8099e929431029d40a74c,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:39.808635 kubelet[2772]: E0124 00:44:39.808583 2772 controller.go:145] "Failed to ensure lease exists, will 
retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e69c55f9b7?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="800ms" Jan 24 00:44:39.996495 kubelet[2772]: I0124 00:44:39.996395 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:39.996979 kubelet[2772]: E0124 00:44:39.996804 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:40.133316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699274839.mount: Deactivated successfully. Jan 24 00:44:40.156781 containerd[1714]: time="2026-01-24T00:44:40.156705256Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:44:40.159122 containerd[1714]: time="2026-01-24T00:44:40.159076411Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 24 00:44:40.161760 containerd[1714]: time="2026-01-24T00:44:40.161724672Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:44:40.164577 containerd[1714]: time="2026-01-24T00:44:40.164544037Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:44:40.166841 containerd[1714]: time="2026-01-24T00:44:40.166805090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:44:40.169095 containerd[1714]: time="2026-01-24T00:44:40.169041241Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:44:40.172308 containerd[1714]: time="2026-01-24T00:44:40.172268216Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 24 00:44:40.176538 containerd[1714]: time="2026-01-24T00:44:40.176482813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 24 00:44:40.177263 containerd[1714]: time="2026-01-24T00:44:40.177049526Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 530.365949ms" Jan 24 00:44:40.178937 containerd[1714]: time="2026-01-24T00:44:40.178906069Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size 
\"311286\" in 514.628085ms" Jan 24 00:44:40.179884 containerd[1714]: time="2026-01-24T00:44:40.179856091Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 520.238615ms" Jan 24 00:44:40.487688 kubelet[2772]: E0124 00:44:40.487644 2772 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.4.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 24 00:44:40.605491 kubelet[2772]: E0124 00:44:40.605436 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.4.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-e69c55f9b7&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 24 00:44:40.609038 kubelet[2772]: E0124 00:44:40.608990 2772 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-e69c55f9b7?timeout=10s\": dial tcp 10.200.4.34:6443: connect: connection refused" interval="1.6s" Jan 24 00:44:40.659793 kubelet[2772]: E0124 00:44:40.659735 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.4.34:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 24 00:44:40.671436 kubelet[2772]: E0124 00:44:40.671401 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.4.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 24 00:44:40.683238 kubelet[2772]: E0124 00:44:40.683198 2772 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.4.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.34:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 24 00:44:40.799121 kubelet[2772]: I0124 00:44:40.799009 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:40.799461 kubelet[2772]: E0124 00:44:40.799430 2772 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.4.34:6443/api/v1/nodes\": dial tcp 10.200.4.34:6443: connect: connection refused" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:40.932618 containerd[1714]: time="2026-01-24T00:44:40.932464473Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:44:40.932618 containerd[1714]: time="2026-01-24T00:44:40.932542074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:44:40.932618 containerd[1714]: time="2026-01-24T00:44:40.932573875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.933490 containerd[1714]: time="2026-01-24T00:44:40.932829081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:44:40.933490 containerd[1714]: time="2026-01-24T00:44:40.932894583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:44:40.933490 containerd[1714]: time="2026-01-24T00:44:40.932914283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.933490 containerd[1714]: time="2026-01-24T00:44:40.932705578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.935216 containerd[1714]: time="2026-01-24T00:44:40.934487919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.936764 containerd[1714]: time="2026-01-24T00:44:40.936653869Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:44:40.937440 containerd[1714]: time="2026-01-24T00:44:40.937383786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:44:40.937634 containerd[1714]: time="2026-01-24T00:44:40.937584291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.937956 containerd[1714]: time="2026-01-24T00:44:40.937856497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:40.967490 systemd[1]: Started cri-containerd-e88032ff752c49928d01b516de2f6f151b8dd3b757a76c654f0740489214ff44.scope - libcontainer container e88032ff752c49928d01b516de2f6f151b8dd3b757a76c654f0740489214ff44. Jan 24 00:44:40.972554 systemd[1]: Started cri-containerd-151642577a03fa7ec024278484cd3eca2a69469c9d6031a2977ceca78c47bbc5.scope - libcontainer container 151642577a03fa7ec024278484cd3eca2a69469c9d6031a2977ceca78c47bbc5. Jan 24 00:44:40.975232 systemd[1]: Started cri-containerd-52dbe5b5f53e289648696cea89ae417e8b6b0b3c1d446adf3be48be04c928af7.scope - libcontainer container 52dbe5b5f53e289648696cea89ae417e8b6b0b3c1d446adf3be48be04c928af7. 
Jan 24 00:44:41.037303 containerd[1714]: time="2026-01-24T00:44:41.037113090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-e69c55f9b7,Uid:dfc0fb99fce8863a5ae4ca2f12b8876f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e88032ff752c49928d01b516de2f6f151b8dd3b757a76c654f0740489214ff44\"" Jan 24 00:44:41.054403 containerd[1714]: time="2026-01-24T00:44:41.054225485Z" level=info msg="CreateContainer within sandbox \"e88032ff752c49928d01b516de2f6f151b8dd3b757a76c654f0740489214ff44\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 24 00:44:41.077392 containerd[1714]: time="2026-01-24T00:44:41.077182315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-e69c55f9b7,Uid:8f5d0253b445f2ad7bd700e34dc2ea0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"151642577a03fa7ec024278484cd3eca2a69469c9d6031a2977ceca78c47bbc5\"" Jan 24 00:44:41.082680 containerd[1714]: time="2026-01-24T00:44:41.082312333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-e69c55f9b7,Uid:04e393ef98b8099e929431029d40a74c,Namespace:kube-system,Attempt:0,} returns sandbox id \"52dbe5b5f53e289648696cea89ae417e8b6b0b3c1d446adf3be48be04c928af7\"" Jan 24 00:44:41.089319 containerd[1714]: time="2026-01-24T00:44:41.089300495Z" level=info msg="CreateContainer within sandbox \"151642577a03fa7ec024278484cd3eca2a69469c9d6031a2977ceca78c47bbc5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 24 00:44:41.098835 containerd[1714]: time="2026-01-24T00:44:41.098719112Z" level=info msg="CreateContainer within sandbox \"52dbe5b5f53e289648696cea89ae417e8b6b0b3c1d446adf3be48be04c928af7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 24 00:44:41.112526 containerd[1714]: time="2026-01-24T00:44:41.112495630Z" level=info msg="CreateContainer within sandbox \"e88032ff752c49928d01b516de2f6f151b8dd3b757a76c654f0740489214ff44\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2779c36ab703591c8d89ba9c51a634365043a1b3f5a33f61f40f65505b4dfb02\"" Jan 24 00:44:41.113188 containerd[1714]: time="2026-01-24T00:44:41.113159546Z" level=info msg="StartContainer for \"2779c36ab703591c8d89ba9c51a634365043a1b3f5a33f61f40f65505b4dfb02\"" Jan 24 00:44:41.145796 containerd[1714]: time="2026-01-24T00:44:41.145698397Z" level=info msg="CreateContainer within sandbox \"151642577a03fa7ec024278484cd3eca2a69469c9d6031a2977ceca78c47bbc5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f9681a6cc0bb3cbb80827663c7659e32a025c93bb47f3b6e06840eac7de5030b\"" Jan 24 00:44:41.146187 containerd[1714]: time="2026-01-24T00:44:41.146161008Z" level=info msg="StartContainer for \"f9681a6cc0bb3cbb80827663c7659e32a025c93bb47f3b6e06840eac7de5030b\"" Jan 24 00:44:41.154825 containerd[1714]: time="2026-01-24T00:44:41.154484500Z" level=info msg="CreateContainer within sandbox \"52dbe5b5f53e289648696cea89ae417e8b6b0b3c1d446adf3be48be04c928af7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"42c11664124124c8b4a6743568bd1914e997f31f2b386425d736a63d3f7651d3\"" Jan 24 00:44:41.156636 containerd[1714]: time="2026-01-24T00:44:41.156522547Z" level=info msg="StartContainer for \"42c11664124124c8b4a6743568bd1914e997f31f2b386425d736a63d3f7651d3\"" Jan 24 00:44:41.165896 systemd[1]: Started cri-containerd-2779c36ab703591c8d89ba9c51a634365043a1b3f5a33f61f40f65505b4dfb02.scope - libcontainer container 
2779c36ab703591c8d89ba9c51a634365043a1b3f5a33f61f40f65505b4dfb02. Jan 24 00:44:41.200528 systemd[1]: Started cri-containerd-f9681a6cc0bb3cbb80827663c7659e32a025c93bb47f3b6e06840eac7de5030b.scope - libcontainer container f9681a6cc0bb3cbb80827663c7659e32a025c93bb47f3b6e06840eac7de5030b. Jan 24 00:44:41.213497 systemd[1]: Started cri-containerd-42c11664124124c8b4a6743568bd1914e997f31f2b386425d736a63d3f7651d3.scope - libcontainer container 42c11664124124c8b4a6743568bd1914e997f31f2b386425d736a63d3f7651d3. Jan 24 00:44:41.286363 containerd[1714]: time="2026-01-24T00:44:41.285613429Z" level=info msg="StartContainer for \"2779c36ab703591c8d89ba9c51a634365043a1b3f5a33f61f40f65505b4dfb02\" returns successfully" Jan 24 00:44:41.291986 containerd[1714]: time="2026-01-24T00:44:41.291856773Z" level=info msg="StartContainer for \"f9681a6cc0bb3cbb80827663c7659e32a025c93bb47f3b6e06840eac7de5030b\" returns successfully" Jan 24 00:44:41.346719 containerd[1714]: time="2026-01-24T00:44:41.346597637Z" level=info msg="StartContainer for \"42c11664124124c8b4a6743568bd1914e997f31f2b386425d736a63d3f7651d3\" returns successfully" Jan 24 00:44:42.278411 kubelet[2772]: E0124 00:44:42.277962 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:42.283551 kubelet[2772]: E0124 00:44:42.283230 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:42.286406 kubelet[2772]: E0124 00:44:42.286156 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:42.402337 kubelet[2772]: I0124 00:44:42.402186 2772 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:43.289284 kubelet[2772]: E0124 00:44:43.289254 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:43.290234 kubelet[2772]: E0124 00:44:43.289974 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:43.291360 kubelet[2772]: E0124 00:44:43.290650 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.292025 kubelet[2772]: E0124 00:44:44.291983 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.293475 kubelet[2772]: E0124 00:44:44.293448 2772 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.572668 kubelet[2772]: E0124 00:44:44.572630 2772 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-e69c55f9b7\" not found" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.616012 kubelet[2772]: I0124 00:44:44.615976 2772 
kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.704220 kubelet[2772]: I0124 00:44:44.703891 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.717214 kubelet[2772]: E0124 00:44:44.717181 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.717616 kubelet[2772]: I0124 00:44:44.717392 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.720376 kubelet[2772]: E0124 00:44:44.720213 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.720376 kubelet[2772]: I0124 00:44:44.720235 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:44.722268 kubelet[2772]: E0124 00:44:44.722227 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e69c55f9b7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:45.189301 kubelet[2772]: I0124 00:44:45.189256 2772 apiserver.go:52] "Watching apiserver" Jan 24 00:44:45.206972 kubelet[2772]: I0124 00:44:45.206932 2772 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:44:45.290980 kubelet[2772]: I0124 00:44:45.290946 2772 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:45.292908 kubelet[2772]: E0124 00:44:45.292872 2772 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-e69c55f9b7\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:47.050849 systemd[1]: Reloading requested from client PID 3053 ('systemctl') (unit session-9.scope)... Jan 24 00:44:47.050864 systemd[1]: Reloading... Jan 24 00:44:47.153363 zram_generator::config[3102]: No configuration found. Jan 24 00:44:47.266275 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 24 00:44:47.359570 systemd[1]: Reloading finished in 308 ms. Jan 24 00:44:47.399599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:47.413862 systemd[1]: kubelet.service: Deactivated successfully. Jan 24 00:44:47.414051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 24 00:44:47.420642 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 24 00:44:47.682768 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 24 00:44:47.694657 (kubelet)[3160]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 24 00:44:48.204474 kubelet[3160]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 24 00:44:48.204474 kubelet[3160]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 24 00:44:48.204910 kubelet[3160]: I0124 00:44:48.204532 3160 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 24 00:44:48.212786 kubelet[3160]: I0124 00:44:48.212755 3160 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 24 00:44:48.212786 kubelet[3160]: I0124 00:44:48.212783 3160 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 24 00:44:48.212958 kubelet[3160]: I0124 00:44:48.212810 3160 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 24 00:44:48.212958 kubelet[3160]: I0124 00:44:48.212817 3160 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 24 00:44:48.214083 kubelet[3160]: I0124 00:44:48.213068 3160 server.go:956] "Client rotation is on, will bootstrap in background" Jan 24 00:44:48.216138 kubelet[3160]: I0124 00:44:48.215489 3160 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 24 00:44:48.221635 kubelet[3160]: I0124 00:44:48.221607 3160 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 24 00:44:48.229971 kubelet[3160]: E0124 00:44:48.229938 3160 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 24 00:44:48.230059 kubelet[3160]: I0124 00:44:48.229992 3160 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 24 00:44:48.233702 kubelet[3160]: I0124 00:44:48.233683 3160 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 24 00:44:48.233963 kubelet[3160]: I0124 00:44:48.233930 3160 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 24 00:44:48.234154 kubelet[3160]: I0124 00:44:48.233961 3160 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-e69c55f9b7","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 24 00:44:48.234288 kubelet[3160]: I0124 00:44:48.234160 3160 topology_manager.go:138] "Creating topology manager with none policy" Jan 24 00:44:48.234288 kubelet[3160]: I0124 00:44:48.234173 3160 container_manager_linux.go:306] "Creating device plugin manager" Jan 24 00:44:48.234288 kubelet[3160]: I0124 00:44:48.234201 3160 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 24 00:44:48.235005 kubelet[3160]: I0124 00:44:48.234981 3160 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:44:48.235187 kubelet[3160]: I0124 00:44:48.235169 3160 kubelet.go:475] "Attempting to sync node with API server" Jan 24 00:44:48.235187 kubelet[3160]: I0124 00:44:48.235185 3160 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 24 00:44:48.240347 kubelet[3160]: I0124 00:44:48.238357 3160 kubelet.go:387] "Adding apiserver pod source" Jan 24 00:44:48.240347 kubelet[3160]: I0124 00:44:48.238395 3160 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 24 00:44:48.243551 kubelet[3160]: I0124 00:44:48.243527 3160 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 24 00:44:48.244165 kubelet[3160]: I0124 00:44:48.244141 3160 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 24 00:44:48.244241 kubelet[3160]: I0124 00:44:48.244184 3160 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 24 
00:44:48.244660 kubelet[3160]: I0124 00:44:48.244644 3160 apiserver.go:52] "Watching apiserver" Jan 24 00:44:48.249302 kubelet[3160]: I0124 00:44:48.249287 3160 server.go:1262] "Started kubelet" Jan 24 00:44:48.251543 kubelet[3160]: I0124 00:44:48.251526 3160 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 24 00:44:48.252109 kubelet[3160]: I0124 00:44:48.252084 3160 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 24 00:44:48.254639 kubelet[3160]: I0124 00:44:48.254622 3160 server.go:310] "Adding debug handlers to kubelet server" Jan 24 00:44:48.261006 kubelet[3160]: I0124 00:44:48.260976 3160 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 24 00:44:48.266269 kubelet[3160]: I0124 00:44:48.266249 3160 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 24 00:44:48.266386 kubelet[3160]: I0124 00:44:48.262178 3160 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 24 00:44:48.266543 kubelet[3160]: I0124 00:44:48.261915 3160 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 24 00:44:48.266650 kubelet[3160]: I0124 00:44:48.266636 3160 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 24 00:44:48.266865 kubelet[3160]: I0124 00:44:48.266852 3160 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 24 00:44:48.267714 kubelet[3160]: I0124 00:44:48.267697 3160 reconciler.go:29] "Reconciler: start to sync state" Jan 24 00:44:48.272030 kubelet[3160]: I0124 00:44:48.271876 3160 factory.go:223] Registration of the systemd container factory successfully Jan 24 00:44:48.273270 kubelet[3160]: I0124 00:44:48.273244 3160 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 24 00:44:48.279091 kubelet[3160]: I0124 00:44:48.279075 3160 factory.go:223] Registration of the containerd container factory successfully Jan 24 00:44:48.281358 kubelet[3160]: E0124 00:44:48.281147 3160 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 24 00:44:48.294164 kubelet[3160]: I0124 00:44:48.293754 3160 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 24 00:44:48.295428 kubelet[3160]: I0124 00:44:48.294905 3160 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 24 00:44:48.295428 kubelet[3160]: I0124 00:44:48.294929 3160 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 24 00:44:48.295428 kubelet[3160]: I0124 00:44:48.294954 3160 kubelet.go:2427] "Starting kubelet main sync loop" Jan 24 00:44:48.295428 kubelet[3160]: E0124 00:44:48.295016 3160 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 24 00:44:48.362275 kubelet[3160]: I0124 00:44:48.362246 3160 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362650 3160 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362681 3160 state_mem.go:36] "Initialized new in-memory state store" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362827 3160 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362838 3160 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362860 3160 policy_none.go:49] "None policy: Start" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362872 3160 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362883 3160 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362988 3160 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 24 00:44:48.363503 kubelet[3160]: I0124 00:44:48.362998 3160 policy_none.go:47] "Start" Jan 24 00:44:48.369074 kubelet[3160]: E0124 00:44:48.368670 3160 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 24 00:44:48.369074 kubelet[3160]: I0124 00:44:48.368852 3160 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 24 00:44:48.369074 kubelet[3160]: I0124 00:44:48.368866 3160 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 24 00:44:48.370654 kubelet[3160]: I0124 00:44:48.370205 3160 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 24 00:44:48.377950 kubelet[3160]: E0124 00:44:48.377915 3160 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 24 00:44:48.404035 kubelet[3160]: I0124 00:44:48.403402 3160 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.404643 kubelet[3160]: I0124 00:44:48.404621 3160 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.405347 kubelet[3160]: I0124 00:44:48.404975 3160 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.417573 kubelet[3160]: I0124 00:44:48.417163 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:44:48.420010 kubelet[3160]: I0124 00:44:48.419970 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:44:48.420140 kubelet[3160]: I0124 00:44:48.420121 3160 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Jan 24 00:44:48.467066 kubelet[3160]: I0124 00:44:48.466948 3160 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 24 00:44:48.470247 kubelet[3160]: I0124 00:44:48.470210 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470413 kubelet[3160]: I0124 00:44:48.470252 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470413 kubelet[3160]: I0124 00:44:48.470277 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470413 kubelet[3160]: I0124 00:44:48.470301 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/04e393ef98b8099e929431029d40a74c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-e69c55f9b7\" (UID: \"04e393ef98b8099e929431029d40a74c\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470413 kubelet[3160]: I0124 00:44:48.470345 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " 
pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470413 kubelet[3160]: I0124 00:44:48.470368 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8f5d0253b445f2ad7bd700e34dc2ea0c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-e69c55f9b7\" (UID: \"8f5d0253b445f2ad7bd700e34dc2ea0c\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470693 kubelet[3160]: I0124 00:44:48.470391 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470693 kubelet[3160]: I0124 00:44:48.470425 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.470693 kubelet[3160]: I0124 00:44:48.470450 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dfc0fb99fce8863a5ae4ca2f12b8876f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-e69c55f9b7\" (UID: \"dfc0fb99fce8863a5ae4ca2f12b8876f\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.481411 kubelet[3160]: I0124 00:44:48.479927 3160 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.497856 kubelet[3160]: I0124 00:44:48.497829 3160 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:48.498340 kubelet[3160]: I0124 00:44:48.498099 3160 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:44:49.163257 kubelet[3160]: I0124 00:44:49.163176 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-e69c55f9b7" podStartSLOduration=1.163154503 podStartE2EDuration="1.163154503s" podCreationTimestamp="2026-01-24 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:44:49.136848307 +0000 UTC m=+1.437966603" watchObservedRunningTime="2026-01-24 00:44:49.163154503 +0000 UTC m=+1.464272699" Jan 24 00:44:49.177353 kubelet[3160]: I0124 00:44:49.175850 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-e69c55f9b7" podStartSLOduration=1.17583469 podStartE2EDuration="1.17583469s" podCreationTimestamp="2026-01-24 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:44:49.175571385 +0000 UTC m=+1.476689581" watchObservedRunningTime="2026-01-24 00:44:49.17583469 +0000 UTC m=+1.476952886" Jan 24 00:44:49.177353 kubelet[3160]: I0124 00:44:49.175962 3160 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-e69c55f9b7" podStartSLOduration=1.1759468929999999 podStartE2EDuration="1.175946893s" podCreationTimestamp="2026-01-24 00:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:44:49.16347161 +0000 UTC m=+1.464589806" watchObservedRunningTime="2026-01-24 00:44:49.175946893 +0000 UTC m=+1.477065189" Jan 24 00:44:53.680510 kubelet[3160]: I0124 00:44:53.680474 3160 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 24 00:44:53.681175 containerd[1714]: time="2026-01-24T00:44:53.681108765Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 24 00:44:53.681593 kubelet[3160]: I0124 00:44:53.681568 3160 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 24 00:44:54.683189 systemd[1]: Created slice kubepods-besteffort-podf30dcc50_0948_4968_9d96_a861f2bf2e44.slice - libcontainer container kubepods-besteffort-podf30dcc50_0948_4968_9d96_a861f2bf2e44.slice. Jan 24 00:44:54.711249 kubelet[3160]: I0124 00:44:54.710924 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f30dcc50-0948-4968-9d96-a861f2bf2e44-kube-proxy\") pod \"kube-proxy-r7kcm\" (UID: \"f30dcc50-0948-4968-9d96-a861f2bf2e44\") " pod="kube-system/kube-proxy-r7kcm" Jan 24 00:44:54.711249 kubelet[3160]: I0124 00:44:54.711027 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f30dcc50-0948-4968-9d96-a861f2bf2e44-xtables-lock\") pod \"kube-proxy-r7kcm\" (UID: \"f30dcc50-0948-4968-9d96-a861f2bf2e44\") " pod="kube-system/kube-proxy-r7kcm" Jan 24 00:44:54.711249 kubelet[3160]: I0124 00:44:54.711102 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f30dcc50-0948-4968-9d96-a861f2bf2e44-lib-modules\") pod \"kube-proxy-r7kcm\" (UID: \"f30dcc50-0948-4968-9d96-a861f2bf2e44\") " pod="kube-system/kube-proxy-r7kcm" Jan 24 00:44:54.711249 kubelet[3160]: I0124 00:44:54.711182 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7lt5\" (UniqueName: \"kubernetes.io/projected/f30dcc50-0948-4968-9d96-a861f2bf2e44-kube-api-access-t7lt5\") pod \"kube-proxy-r7kcm\" (UID: \"f30dcc50-0948-4968-9d96-a861f2bf2e44\") " pod="kube-system/kube-proxy-r7kcm" Jan 24 00:44:54.924691 systemd[1]: Created slice kubepods-besteffort-pod0fb61d0a_1a8d_4512_bf0f_49983274f535.slice - libcontainer container kubepods-besteffort-pod0fb61d0a_1a8d_4512_bf0f_49983274f535.slice. 
Jan 24 00:44:54.996903 containerd[1714]: time="2026-01-24T00:44:54.996270056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7kcm,Uid:f30dcc50-0948-4968-9d96-a861f2bf2e44,Namespace:kube-system,Attempt:0,}" Jan 24 00:44:55.013430 kubelet[3160]: I0124 00:44:55.013396 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dxr\" (UniqueName: \"kubernetes.io/projected/0fb61d0a-1a8d-4512-bf0f-49983274f535-kube-api-access-68dxr\") pod \"tigera-operator-65cdcdfd6d-9wkwx\" (UID: \"0fb61d0a-1a8d-4512-bf0f-49983274f535\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9wkwx" Jan 24 00:44:55.013430 kubelet[3160]: I0124 00:44:55.013436 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0fb61d0a-1a8d-4512-bf0f-49983274f535-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-9wkwx\" (UID: \"0fb61d0a-1a8d-4512-bf0f-49983274f535\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-9wkwx" Jan 24 00:44:55.041019 containerd[1714]: time="2026-01-24T00:44:55.040308450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:44:55.041183 containerd[1714]: time="2026-01-24T00:44:55.041043865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:44:55.041183 containerd[1714]: time="2026-01-24T00:44:55.041083366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:55.041375 containerd[1714]: time="2026-01-24T00:44:55.041239769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:55.061444 systemd[1]: run-containerd-runc-k8s.io-d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83-runc.IfrObt.mount: Deactivated successfully. Jan 24 00:44:55.071497 systemd[1]: Started cri-containerd-d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83.scope - libcontainer container d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83. 
Jan 24 00:44:55.092496 containerd[1714]: time="2026-01-24T00:44:55.092288205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-r7kcm,Uid:f30dcc50-0948-4968-9d96-a861f2bf2e44,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83\"" Jan 24 00:44:55.105626 containerd[1714]: time="2026-01-24T00:44:55.103896440Z" level=info msg="CreateContainer within sandbox \"d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 24 00:44:55.148201 containerd[1714]: time="2026-01-24T00:44:55.148097538Z" level=info msg="CreateContainer within sandbox \"d9c4a89ab15ac57e1a120284efae0a6b5d19a32c5a2105afe1c89a6c8460dc83\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f63b3e2a683e3a356859a3ac1dfbce522fdf8176a858a26f1cdebfa633724a9\"" Jan 24 00:44:55.148858 containerd[1714]: time="2026-01-24T00:44:55.148827652Z" level=info msg="StartContainer for \"6f63b3e2a683e3a356859a3ac1dfbce522fdf8176a858a26f1cdebfa633724a9\"" Jan 24 00:44:55.176569 systemd[1]: Started cri-containerd-6f63b3e2a683e3a356859a3ac1dfbce522fdf8176a858a26f1cdebfa633724a9.scope - libcontainer container 6f63b3e2a683e3a356859a3ac1dfbce522fdf8176a858a26f1cdebfa633724a9. Jan 24 00:44:55.206092 containerd[1714]: time="2026-01-24T00:44:55.206053414Z" level=info msg="StartContainer for \"6f63b3e2a683e3a356859a3ac1dfbce522fdf8176a858a26f1cdebfa633724a9\" returns successfully" Jan 24 00:44:55.233225 containerd[1714]: time="2026-01-24T00:44:55.233174864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9wkwx,Uid:0fb61d0a-1a8d-4512-bf0f-49983274f535,Namespace:tigera-operator,Attempt:0,}" Jan 24 00:44:55.279766 containerd[1714]: time="2026-01-24T00:44:55.279599406Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:44:55.279766 containerd[1714]: time="2026-01-24T00:44:55.279652907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:44:55.279766 containerd[1714]: time="2026-01-24T00:44:55.279666008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:55.280506 containerd[1714]: time="2026-01-24T00:44:55.279747209Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:44:55.303740 systemd[1]: Started cri-containerd-44efa33fd33d1d5f011a54ec282691681e7ef77b24df17c68f99cf061711f0ad.scope - libcontainer container 44efa33fd33d1d5f011a54ec282691681e7ef77b24df17c68f99cf061711f0ad. 
Jan 24 00:44:55.348404 containerd[1714]: time="2026-01-24T00:44:55.346232659Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-9wkwx,Uid:0fb61d0a-1a8d-4512-bf0f-49983274f535,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"44efa33fd33d1d5f011a54ec282691681e7ef77b24df17c68f99cf061711f0ad\"" Jan 24 00:44:55.350304 containerd[1714]: time="2026-01-24T00:44:55.350097837Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 24 00:44:55.361648 kubelet[3160]: I0124 00:44:55.361584 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-r7kcm" podStartSLOduration=1.36156657 podStartE2EDuration="1.36156657s" podCreationTimestamp="2026-01-24 00:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:44:55.361351566 +0000 UTC m=+7.662469862" watchObservedRunningTime="2026-01-24 00:44:55.36156657 +0000 UTC m=+7.662684866" Jan 24 00:44:57.017907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1052669572.mount: Deactivated successfully. Jan 24 00:44:57.800725 containerd[1714]: time="2026-01-24T00:44:57.800674213Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:57.803175 containerd[1714]: time="2026-01-24T00:44:57.803022365Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691" Jan 24 00:44:57.806028 containerd[1714]: time="2026-01-24T00:44:57.805967030Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:57.809734 containerd[1714]: time="2026-01-24T00:44:57.809683212Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:44:57.810349 containerd[1714]: time="2026-01-24T00:44:57.810299426Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.460164988s" Jan 24 00:44:57.810424 containerd[1714]: time="2026-01-24T00:44:57.810356527Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\"" Jan 24 00:44:57.822186 containerd[1714]: time="2026-01-24T00:44:57.821849781Z" level=info msg="CreateContainer within sandbox \"44efa33fd33d1d5f011a54ec282691681e7ef77b24df17c68f99cf061711f0ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 24 00:44:57.855731 containerd[1714]: time="2026-01-24T00:44:57.855691530Z" level=info msg="CreateContainer within sandbox \"44efa33fd33d1d5f011a54ec282691681e7ef77b24df17c68f99cf061711f0ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d35a5bee41afa315ab799232c770860d4b47edf0f10f63068a290673d4bb7103\"" Jan 24 00:44:57.856359 containerd[1714]: time="2026-01-24T00:44:57.856214941Z" level=info msg="StartContainer for \"d35a5bee41afa315ab799232c770860d4b47edf0f10f63068a290673d4bb7103\"" Jan 24 
00:44:57.888492 systemd[1]: Started cri-containerd-d35a5bee41afa315ab799232c770860d4b47edf0f10f63068a290673d4bb7103.scope - libcontainer container d35a5bee41afa315ab799232c770860d4b47edf0f10f63068a290673d4bb7103. Jan 24 00:44:57.921269 containerd[1714]: time="2026-01-24T00:44:57.920898072Z" level=info msg="StartContainer for \"d35a5bee41afa315ab799232c770860d4b47edf0f10f63068a290673d4bb7103\" returns successfully" Jan 24 00:44:58.397417 kubelet[3160]: I0124 00:44:58.397360 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-9wkwx" podStartSLOduration=1.935697589 podStartE2EDuration="4.397343309s" podCreationTimestamp="2026-01-24 00:44:54 +0000 UTC" firstStartedPulling="2026-01-24 00:44:55.349712429 +0000 UTC m=+7.650830625" lastFinishedPulling="2026-01-24 00:44:57.811358149 +0000 UTC m=+10.112476345" observedRunningTime="2026-01-24 00:44:58.373464381 +0000 UTC m=+10.674582577" watchObservedRunningTime="2026-01-24 00:44:58.397343309 +0000 UTC m=+10.698461505" Jan 24 00:45:04.226189 sudo[2231]: pam_unix(sudo:session): session closed for user root Jan 24 00:45:04.324568 sshd[2228]: pam_unix(sshd:session): session closed for user core Jan 24 00:45:04.332113 systemd[1]: sshd@6-10.200.4.34:22-10.200.16.10:56264.service: Deactivated successfully. Jan 24 00:45:04.332775 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit. Jan 24 00:45:04.335623 systemd[1]: session-9.scope: Deactivated successfully. Jan 24 00:45:04.336220 systemd[1]: session-9.scope: Consumed 4.638s CPU time, 160.9M memory peak, 0B memory swap peak. Jan 24 00:45:04.337663 systemd-logind[1697]: Removed session 9. Jan 24 00:45:09.958416 systemd[1]: Created slice kubepods-besteffort-pod1343d441_baac_400e_aa53_a142579cdc5b.slice - libcontainer container kubepods-besteffort-pod1343d441_baac_400e_aa53_a142579cdc5b.slice. Jan 24 00:45:10.107866 kubelet[3160]: I0124 00:45:10.107818 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1343d441-baac-400e-aa53-a142579cdc5b-tigera-ca-bundle\") pod \"calico-typha-7c8d8696c9-lr5d8\" (UID: \"1343d441-baac-400e-aa53-a142579cdc5b\") " pod="calico-system/calico-typha-7c8d8696c9-lr5d8" Jan 24 00:45:10.108281 kubelet[3160]: I0124 00:45:10.107873 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sw4r\" (UniqueName: \"kubernetes.io/projected/1343d441-baac-400e-aa53-a142579cdc5b-kube-api-access-6sw4r\") pod \"calico-typha-7c8d8696c9-lr5d8\" (UID: \"1343d441-baac-400e-aa53-a142579cdc5b\") " pod="calico-system/calico-typha-7c8d8696c9-lr5d8" Jan 24 00:45:10.108281 kubelet[3160]: I0124 00:45:10.107897 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1343d441-baac-400e-aa53-a142579cdc5b-typha-certs\") pod \"calico-typha-7c8d8696c9-lr5d8\" (UID: \"1343d441-baac-400e-aa53-a142579cdc5b\") " pod="calico-system/calico-typha-7c8d8696c9-lr5d8" Jan 24 00:45:10.188633 systemd[1]: Created slice kubepods-besteffort-pode49d67eb_e14e_49a9_b1af_a544c0ef91d1.slice - libcontainer container kubepods-besteffort-pode49d67eb_e14e_49a9_b1af_a544c0ef91d1.slice. 
Jan 24 00:45:10.268507 containerd[1714]: time="2026-01-24T00:45:10.268377881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8d8696c9-lr5d8,Uid:1343d441-baac-400e-aa53-a142579cdc5b,Namespace:calico-system,Attempt:0,}"
Jan 24 00:45:10.310144 kubelet[3160]: I0124 00:45:10.308950 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwxmw\" (UniqueName: \"kubernetes.io/projected/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-kube-api-access-kwxmw\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.310144 kubelet[3160]: I0124 00:45:10.309037 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-xtables-lock\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.310144 kubelet[3160]: I0124 00:45:10.309114 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-lib-modules\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.310144 kubelet[3160]: I0124 00:45:10.309137 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-tigera-ca-bundle\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.310144 kubelet[3160]: I0124 00:45:10.309990 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-node-certs\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.311495 kubelet[3160]: I0124 00:45:10.310028 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-var-run-calico\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313301 kubelet[3160]: I0124 00:45:10.310729 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-cni-log-dir\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313301 kubelet[3160]: I0124 00:45:10.312025 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-policysync\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313301 kubelet[3160]: I0124 00:45:10.312913 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-cni-bin-dir\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313301 kubelet[3160]: I0124 00:45:10.312979 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-var-lib-calico\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313301 kubelet[3160]: I0124 00:45:10.313005 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-flexvol-driver-host\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.313775 kubelet[3160]: I0124 00:45:10.313075 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e49d67eb-e14e-49a9-b1af-a544c0ef91d1-cni-net-dir\") pod \"calico-node-bmptt\" (UID: \"e49d67eb-e14e-49a9-b1af-a544c0ef91d1\") " pod="calico-system/calico-node-bmptt"
Jan 24 00:45:10.331046 containerd[1714]: time="2026-01-24T00:45:10.330784839Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:45:10.331046 containerd[1714]: time="2026-01-24T00:45:10.330870041Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:45:10.331046 containerd[1714]: time="2026-01-24T00:45:10.330886942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:45:10.331046 containerd[1714]: time="2026-01-24T00:45:10.330971144Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:45:10.363501 systemd[1]: Started cri-containerd-3b5674eae14ea17670e42d02fd561a810fc6c12af78488fe87faca0c84091afe.scope - libcontainer container 3b5674eae14ea17670e42d02fd561a810fc6c12af78488fe87faca0c84091afe.
Jan 24 00:45:10.418242 kubelet[3160]: E0124 00:45:10.418202 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:45:10.435883 containerd[1714]: time="2026-01-24T00:45:10.435744358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c8d8696c9-lr5d8,Uid:1343d441-baac-400e-aa53-a142579cdc5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"3b5674eae14ea17670e42d02fd561a810fc6c12af78488fe87faca0c84091afe\""
Jan 24 00:45:10.439066 containerd[1714]: time="2026-01-24T00:45:10.439038941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 24 00:45:10.466852 kubelet[3160]: E0124 00:45:10.466797 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:45:10.466852 kubelet[3160]: W0124 00:45:10.466821 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:45:10.466852 kubelet[3160]: E0124 00:45:10.466847 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:45:10.498661 containerd[1714]: time="2026-01-24T00:45:10.498613327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bmptt,Uid:e49d67eb-e14e-49a9-b1af-a544c0ef91d1,Namespace:calico-system,Attempt:0,}"
Jan 24 00:45:10.515368 kubelet[3160]: I0124 00:45:10.515206 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k4rh\" (UniqueName: \"kubernetes.io/projected/6289d75a-fb3d-4a7e-b426-fb74d3f97fd2-kube-api-access-6k4rh\") pod \"csi-node-driver-msr6b\" (UID: \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\") " pod="calico-system/csi-node-driver-msr6b"
Jan 24 00:45:10.517484 kubelet[3160]: I0124 00:45:10.516264 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6289d75a-fb3d-4a7e-b426-fb74d3f97fd2-kubelet-dir\") pod \"csi-node-driver-msr6b\" (UID: \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\") " pod="calico-system/csi-node-driver-msr6b"
Jan 24 00:45:10.519723 kubelet[3160]: I0124 00:45:10.519624 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6289d75a-fb3d-4a7e-b426-fb74d3f97fd2-socket-dir\") pod \"csi-node-driver-msr6b\" (UID: \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\") " pod="calico-system/csi-node-driver-msr6b"
Jan 24 00:45:10.520226 kubelet[3160]: I0124 00:45:10.520206 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6289d75a-fb3d-4a7e-b426-fb74d3f97fd2-registration-dir\") pod \"csi-node-driver-msr6b\" (UID: \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\") " pod="calico-system/csi-node-driver-msr6b"
Jan 24 00:45:10.522756 kubelet[3160]: I0124 00:45:10.522738 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6289d75a-fb3d-4a7e-b426-fb74d3f97fd2-varrun\") pod \"csi-node-driver-msr6b\" (UID: \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\") " pod="calico-system/csi-node-driver-msr6b"
Jan 24 00:45:10.563421 containerd[1714]: time="2026-01-24T00:45:10.562031710Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 24 00:45:10.563421 containerd[1714]: time="2026-01-24T00:45:10.562096512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 24 00:45:10.563421 containerd[1714]: time="2026-01-24T00:45:10.562111412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:45:10.563421 containerd[1714]: time="2026-01-24T00:45:10.562188514Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 24 00:45:10.601096 systemd[1]: Started cri-containerd-7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9.scope - libcontainer container 7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9.
Jan 24 00:45:10.677884 containerd[1714]: time="2026-01-24T00:45:10.677748598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bmptt,Uid:e49d67eb-e14e-49a9-b1af-a544c0ef91d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\""
Jan 24 00:45:11.776253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4198720012.mount: Deactivated successfully.
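The driver-call failures that repeat through this stretch are all the same probe: kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers, finds the nodeagent~uds directory (the one calico-node's flexvol-driver-host host-path points at), and runs its uds binary with the init argument, expecting a JSON status on stdout. The binary has not been installed yet, so the exec fails ("executable file not found in $PATH") and unmarshalling the empty output fails ("unexpected end of JSON input"); the probe evidently re-runs as volume events arrive, hence the repetition. A minimal sketch of that call pattern, mirroring the message pair seen in the log; the driverStatus shape is illustrative, and this is not kubelet's actual code:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is an illustrative stand-in for the JSON a FlexVolume
// driver is expected to print in response to "init".
type driverStatus struct {
	Status       string `json:"status"`
	Capabilities struct {
		Attach bool `json:"attach"`
	} `json:"capabilities"`
}

// probeInit runs "<driver> init" and decodes stdout. The output is decoded
// even when the exec itself failed (the log shows both messages for each
// probe), so a missing binary yields an exec error *and* an
// "unexpected end of JSON input" error from the empty output.
func probeInit(driver string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, "init").Output()
	var st driverStatus
	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
		return nil, fmt.Errorf("driver call failed: %v; unmarshal: %v", execErr, jsonErr)
	}
	return &st, nil
}

func main() {
	_, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
	fmt.Println(err)
}
```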
Jan 24 00:45:12.296432 kubelet[3160]: E0124 00:45:12.296317 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:45:13.225405 containerd[1714]: time="2026-01-24T00:45:13.225361471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:13.227561 containerd[1714]: time="2026-01-24T00:45:13.227417417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 24 00:45:13.231417 containerd[1714]: time="2026-01-24T00:45:13.230287682Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:13.237875 containerd[1714]: time="2026-01-24T00:45:13.237697149Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:13.238349 containerd[1714]: time="2026-01-24T00:45:13.238301963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.79878371s"
Jan 24 00:45:13.238417 containerd[1714]: time="2026-01-24T00:45:13.238361664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 24 00:45:13.241639 containerd[1714]: time="2026-01-24T00:45:13.241386332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 24 00:45:13.255506 containerd[1714]: time="2026-01-24T00:45:13.255323447Z" level=info msg="CreateContainer within sandbox \"3b5674eae14ea17670e42d02fd561a810fc6c12af78488fe87faca0c84091afe\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 24 00:45:13.295185 containerd[1714]: time="2026-01-24T00:45:13.295137446Z" level=info msg="CreateContainer within sandbox \"3b5674eae14ea17670e42d02fd561a810fc6c12af78488fe87faca0c84091afe\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"85e5edacbf3b18b5934b5a00de603119e25a932064f463e6e83a2c750b64b7c9\""
Jan 24 00:45:13.298056 containerd[1714]: time="2026-01-24T00:45:13.296539377Z" level=info msg="StartContainer for \"85e5edacbf3b18b5934b5a00de603119e25a932064f463e6e83a2c750b64b7c9\""
Jan 24 00:45:13.327491 systemd[1]: Started cri-containerd-85e5edacbf3b18b5934b5a00de603119e25a932064f463e6e83a2c750b64b7c9.scope - libcontainer container 85e5edacbf3b18b5934b5a00de603119e25a932064f463e6e83a2c750b64b7c9.
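The reported pull time is consistent with the surrounding entries: PullImage for typha was logged at 00:45:10.439038941 and the Pulled event at 00:45:13.238301963, a gap of roughly 2.799s against the printed 2.79878371s (the small difference presumably being where containerd starts and stops its own timer). A quick check of that subtraction, with timestamps copied from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the PullImage and Pulled entries above.
	start, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:45:10.439038941Z")
	done, _ := time.Parse(time.RFC3339Nano, "2026-01-24T00:45:13.238301963Z")
	fmt.Println(done.Sub(start)) // 2.799263022s, close to the logged 2.79878371s
}
```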
Jan 24 00:45:13.381350 containerd[1714]: time="2026-01-24T00:45:13.380666076Z" level=info msg="StartContainer for \"85e5edacbf3b18b5934b5a00de603119e25a932064f463e6e83a2c750b64b7c9\" returns successfully"
Jan 24 00:45:13.405563 kubelet[3160]: I0124 00:45:13.404534 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c8d8696c9-lr5d8" podStartSLOduration=1.603667857 podStartE2EDuration="4.404517215s" podCreationTimestamp="2026-01-24 00:45:09 +0000 UTC" firstStartedPulling="2026-01-24 00:45:10.438718133 +0000 UTC m=+22.739836329" lastFinishedPulling="2026-01-24 00:45:13.239567491 +0000 UTC m=+25.540685687" observedRunningTime="2026-01-24 00:45:13.402846477 +0000 UTC m=+25.703964673" watchObservedRunningTime="2026-01-24 00:45:13.404517215 +0000 UTC m=+25.705635511"
Jan 24 00:45:13.437318 kubelet[3160]: E0124 00:45:13.437089 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:45:13.437318 kubelet[3160]: W0124 00:45:13.437135 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:45:13.437318 kubelet[3160]: E0124 00:45:13.437163 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 24 00:45:14.296720 kubelet[3160]: E0124 00:45:14.296313 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:45:14.388238 kubelet[3160]: I0124 00:45:14.388204 3160 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 24 00:45:14.449607 kubelet[3160]: E0124 00:45:14.449575 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 24 00:45:14.449607 kubelet[3160]: W0124 00:45:14.449598 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 24 00:45:14.449607 kubelet[3160]: E0124 00:45:14.449623 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 24 00:45:14.450731 kubelet[3160]: E0124 00:45:14.450667 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.450731 kubelet[3160]: W0124 00:45:14.450678 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.450731 kubelet[3160]: E0124 00:45:14.450691 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.451015 kubelet[3160]: E0124 00:45:14.450885 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.451015 kubelet[3160]: W0124 00:45:14.450895 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.451015 kubelet[3160]: E0124 00:45:14.450907 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.451197 kubelet[3160]: E0124 00:45:14.451092 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.451197 kubelet[3160]: W0124 00:45:14.451101 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.451197 kubelet[3160]: E0124 00:45:14.451112 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.451627 kubelet[3160]: E0124 00:45:14.451608 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.451627 kubelet[3160]: W0124 00:45:14.451622 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.451772 kubelet[3160]: E0124 00:45:14.451636 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.452010 kubelet[3160]: E0124 00:45:14.451846 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452010 kubelet[3160]: W0124 00:45:14.451857 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.452010 kubelet[3160]: E0124 00:45:14.451870 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:45:14.452267 kubelet[3160]: E0124 00:45:14.452066 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452267 kubelet[3160]: W0124 00:45:14.452077 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.452267 kubelet[3160]: E0124 00:45:14.452091 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.452580 kubelet[3160]: E0124 00:45:14.452269 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452580 kubelet[3160]: W0124 00:45:14.452279 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.452580 kubelet[3160]: E0124 00:45:14.452291 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.452580 kubelet[3160]: E0124 00:45:14.452537 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452580 kubelet[3160]: W0124 00:45:14.452547 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.452580 kubelet[3160]: E0124 00:45:14.452560 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.452952 kubelet[3160]: E0124 00:45:14.452749 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452952 kubelet[3160]: W0124 00:45:14.452760 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.452952 kubelet[3160]: E0124 00:45:14.452771 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.452952 kubelet[3160]: E0124 00:45:14.452942 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.452952 kubelet[3160]: W0124 00:45:14.452951 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.453244 kubelet[3160]: E0124 00:45:14.452962 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:45:14.453244 kubelet[3160]: E0124 00:45:14.453142 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.453244 kubelet[3160]: W0124 00:45:14.453153 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.453244 kubelet[3160]: E0124 00:45:14.453165 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.462745 kubelet[3160]: E0124 00:45:14.462639 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.462745 kubelet[3160]: W0124 00:45:14.462655 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.462745 kubelet[3160]: E0124 00:45:14.462670 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.463226 kubelet[3160]: E0124 00:45:14.463095 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.463226 kubelet[3160]: W0124 00:45:14.463109 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.463226 kubelet[3160]: E0124 00:45:14.463123 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.463669 kubelet[3160]: E0124 00:45:14.463508 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.463669 kubelet[3160]: W0124 00:45:14.463523 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.463669 kubelet[3160]: E0124 00:45:14.463536 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.464159 kubelet[3160]: E0124 00:45:14.463965 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.464159 kubelet[3160]: W0124 00:45:14.464006 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.464159 kubelet[3160]: E0124 00:45:14.464021 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:45:14.464697 kubelet[3160]: E0124 00:45:14.464475 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.464697 kubelet[3160]: W0124 00:45:14.464489 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.464697 kubelet[3160]: E0124 00:45:14.464504 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.465130 kubelet[3160]: E0124 00:45:14.464917 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.465130 kubelet[3160]: W0124 00:45:14.464931 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.465130 kubelet[3160]: E0124 00:45:14.464946 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.465426 kubelet[3160]: E0124 00:45:14.465303 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.465426 kubelet[3160]: W0124 00:45:14.465316 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.465426 kubelet[3160]: E0124 00:45:14.465347 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.465891 kubelet[3160]: E0124 00:45:14.465717 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.465891 kubelet[3160]: W0124 00:45:14.465729 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.465891 kubelet[3160]: E0124 00:45:14.465741 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.466245 kubelet[3160]: E0124 00:45:14.466070 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.466245 kubelet[3160]: W0124 00:45:14.466081 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.466245 kubelet[3160]: E0124 00:45:14.466093 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:45:14.466600 kubelet[3160]: E0124 00:45:14.466465 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.466600 kubelet[3160]: W0124 00:45:14.466477 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.466600 kubelet[3160]: E0124 00:45:14.466489 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.467120 kubelet[3160]: E0124 00:45:14.466846 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.467120 kubelet[3160]: W0124 00:45:14.466857 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.467120 kubelet[3160]: E0124 00:45:14.466869 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.467389 kubelet[3160]: E0124 00:45:14.467153 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.467389 kubelet[3160]: W0124 00:45:14.467165 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.467389 kubelet[3160]: E0124 00:45:14.467180 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.468625 kubelet[3160]: E0124 00:45:14.468467 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.468625 kubelet[3160]: W0124 00:45:14.468482 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.468625 kubelet[3160]: E0124 00:45:14.468503 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.469195 kubelet[3160]: E0124 00:45:14.469074 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.469195 kubelet[3160]: W0124 00:45:14.469086 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.469195 kubelet[3160]: E0124 00:45:14.469099 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 24 00:45:14.469906 kubelet[3160]: E0124 00:45:14.469533 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.469906 kubelet[3160]: W0124 00:45:14.469546 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.469906 kubelet[3160]: E0124 00:45:14.469560 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.470205 kubelet[3160]: E0124 00:45:14.470187 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.470205 kubelet[3160]: W0124 00:45:14.470200 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.470355 kubelet[3160]: E0124 00:45:14.470214 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.470449 kubelet[3160]: E0124 00:45:14.470431 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.470449 kubelet[3160]: W0124 00:45:14.470444 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.470559 kubelet[3160]: E0124 00:45:14.470457 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 24 00:45:14.470696 kubelet[3160]: E0124 00:45:14.470679 3160 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 24 00:45:14.470696 kubelet[3160]: W0124 00:45:14.470692 3160 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 24 00:45:14.470774 kubelet[3160]: E0124 00:45:14.470706 3160 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
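The repeated kubelet triplet above comes from FlexVolume plugin probing: the kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, and for each vendor~driver directory it executes the driver binary with "init" as its first argument and parses stdout as JSON. Here the nodeagent~uds/uds executable is missing ("executable file not found in $PATH"), so stdout is empty and the JSON unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of the driver-call contract the kubelet is probing for, written in Go as an illustration; the real nodeagent~uds driver is not part of this log and any such driver could equally be a shell script.

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the JSON envelope the kubelet parses after every
    // FlexVolume driver call; an empty stdout is exactly what produces the
    // "unexpected end of JSON input" errors above.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) < 2 {
    		fmt.Fprintln(os.Stderr, "usage: uds <init|mount|unmount> [args]")
    		os.Exit(1)
    	}
    	var out driverStatus
    	switch os.Args[1] {
    	case "init":
    		// "attach": false tells the kubelet this driver needs no attach/detach step.
    		out = driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
    	default:
    		out = driverStatus{Status: "Not supported"}
    	}
    	b, err := json.Marshal(out)
    	if err != nil {
    		os.Exit(1)
    	}
    	fmt.Println(string(b))
    }

Installed as /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a binary like this would answer the probe and stop the error burst; the probing itself repeats on every plugin-directory event, which is why the same three messages recur.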
Jan 24 00:45:14.671380 containerd[1714]: time="2026-01-24T00:45:14.671319709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:14.673734 containerd[1714]: time="2026-01-24T00:45:14.673604661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 24 00:45:14.677293 containerd[1714]: time="2026-01-24T00:45:14.677238843Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:14.681862 containerd[1714]: time="2026-01-24T00:45:14.681809346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 24 00:45:14.682859 containerd[1714]: time="2026-01-24T00:45:14.682335758Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.440899924s"
Jan 24 00:45:14.682859 containerd[1714]: time="2026-01-24T00:45:14.682378159Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 24 00:45:14.689799 containerd[1714]: time="2026-01-24T00:45:14.689772026Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 24 00:45:14.731487 containerd[1714]: time="2026-01-24T00:45:14.731446666Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d\""
Jan 24 00:45:14.732112 containerd[1714]: time="2026-01-24T00:45:14.732071480Z" level=info msg="StartContainer for \"65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d\""
Jan 24 00:45:14.767693 systemd[1]: Started cri-containerd-65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d.scope - libcontainer container 65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d.
Jan 24 00:45:14.796977 containerd[1714]: time="2026-01-24T00:45:14.796927144Z" level=info msg="StartContainer for \"65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d\" returns successfully"
Jan 24 00:45:14.804203 systemd[1]: cri-containerd-65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d.scope: Deactivated successfully.
Jan 24 00:45:14.826111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d-rootfs.mount: Deactivated successfully.
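The containerd entries above show the flexvol-driver init container's image being pulled and the container starting and exiting; the cri-containerd-….scope deactivating right after a successful start is normal for a run-once init container. For orientation, a hedged sketch of the same pull through the containerd Go client: the image reference comes from the log, while the socket path and the "k8s.io" CRI namespace are conventional defaults assumed here rather than stated in the log.

    package main

    import (
    	"context"
    	"log"

    	containerd "github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Connect to containerd over its default socket (assumption).
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace, which is where
    	// ImageCreate events like the ones above are emitted.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }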
Jan 24 00:45:16.297370 kubelet[3160]: E0124 00:45:16.296639 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:16.318927 containerd[1714]: time="2026-01-24T00:45:16.318863392Z" level=info msg="shim disconnected" id=65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d namespace=k8s.io Jan 24 00:45:16.318927 containerd[1714]: time="2026-01-24T00:45:16.318922894Z" level=warning msg="cleaning up after shim disconnected" id=65b786535a9e2cd577e90adfad4c45c64d2be464c1b22f402606fe570924369d namespace=k8s.io Jan 24 00:45:16.319413 containerd[1714]: time="2026-01-24T00:45:16.318936894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:45:16.331212 containerd[1714]: time="2026-01-24T00:45:16.331171000Z" level=warning msg="cleanup warnings time=\"2026-01-24T00:45:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 24 00:45:16.396883 containerd[1714]: time="2026-01-24T00:45:16.396308229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 24 00:45:18.296242 kubelet[3160]: E0124 00:45:18.295796 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:20.296359 kubelet[3160]: E0124 00:45:20.295512 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:20.776689 containerd[1714]: time="2026-01-24T00:45:20.776643454Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:20.779678 containerd[1714]: time="2026-01-24T00:45:20.779536427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 24 00:45:20.783949 containerd[1714]: time="2026-01-24T00:45:20.783875035Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:20.788001 containerd[1714]: time="2026-01-24T00:45:20.787803133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:20.788874 containerd[1714]: time="2026-01-24T00:45:20.788751157Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 4.392387927s" Jan 24 
00:45:20.788874 containerd[1714]: time="2026-01-24T00:45:20.788788458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 24 00:45:20.797192 containerd[1714]: time="2026-01-24T00:45:20.797161167Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 24 00:45:20.840027 containerd[1714]: time="2026-01-24T00:45:20.839909336Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6\"" Jan 24 00:45:20.840613 containerd[1714]: time="2026-01-24T00:45:20.840553952Z" level=info msg="StartContainer for \"c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6\"" Jan 24 00:45:20.875169 systemd[1]: run-containerd-runc-k8s.io-c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6-runc.wKjQdH.mount: Deactivated successfully. Jan 24 00:45:20.885780 systemd[1]: Started cri-containerd-c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6.scope - libcontainer container c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6. Jan 24 00:45:20.920471 containerd[1714]: time="2026-01-24T00:45:20.920430549Z" level=info msg="StartContainer for \"c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6\" returns successfully" Jan 24 00:45:22.296874 kubelet[3160]: E0124 00:45:22.295694 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:22.565907 systemd[1]: cri-containerd-c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6.scope: Deactivated successfully. Jan 24 00:45:22.580660 kubelet[3160]: I0124 00:45:22.579893 3160 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 24 00:45:22.592168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6-rootfs.mount: Deactivated successfully. Jan 24 00:45:23.774096 systemd[1]: Created slice kubepods-burstable-podf6f98da9_ca79_4902_82ee_3f5271b4428b.slice - libcontainer container kubepods-burstable-podf6f98da9_ca79_4902_82ee_3f5271b4428b.slice. Jan 24 00:45:23.787839 systemd[1]: Created slice kubepods-besteffort-pod1c848a50_4637_48b0_8299_82d8998eb7e8.slice - libcontainer container kubepods-besteffort-pod1c848a50_4637_48b0_8299_82d8998eb7e8.slice. Jan 24 00:45:23.794592 systemd[1]: Created slice kubepods-besteffort-pod6289d75a_fb3d_4a7e_b426_fb74d3f97fd2.slice - libcontainer container kubepods-besteffort-pod6289d75a_fb3d_4a7e_b426_fb74d3f97fd2.slice. 
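After pulling ghcr.io/flatcar/calico/cni:v3.30.4, the install-cni container runs and exits; its role is to drop the CNI binaries and network config onto the host. Until a config exists, the kubelet keeps emitting the NetworkReady=false / "cni plugin not initialized" condition seen every two seconds above. A small sketch of that readiness check, assuming the conventional config directory /etc/cni/net.d (the log itself never names the path):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    func main() {
    	// Conventional CNI config directory (an assumption, not from the log).
    	// The runtime reports the network as not ready while it is empty.
    	const confDir = "/etc/cni/net.d"
    	entries, err := os.ReadDir(confDir)
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	if len(entries) == 0 {
    		fmt.Println("no CNI config yet: cni plugin not initialized")
    		return
    	}
    	for _, e := range entries {
    		fmt.Println(filepath.Join(confDir, e.Name()))
    	}
    }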
Jan 24 00:45:23.810426 containerd[1714]: time="2026-01-24T00:45:23.810356797Z" level=info msg="shim disconnected" id=c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6 namespace=k8s.io Jan 24 00:45:23.810821 containerd[1714]: time="2026-01-24T00:45:23.810427599Z" level=warning msg="cleaning up after shim disconnected" id=c2337a51a7ce68029ecaefe83863138dc958e27c3f89d6dd47835cd2a358fee6 namespace=k8s.io Jan 24 00:45:23.810821 containerd[1714]: time="2026-01-24T00:45:23.810440299Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 24 00:45:23.814445 systemd[1]: Created slice kubepods-besteffort-podea8e1ae1_59b4_45f9_9265_2981e79d3abb.slice - libcontainer container kubepods-besteffort-podea8e1ae1_59b4_45f9_9265_2981e79d3abb.slice. Jan 24 00:45:23.820312 containerd[1714]: time="2026-01-24T00:45:23.820272713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msr6b,Uid:6289d75a-fb3d-4a7e-b426-fb74d3f97fd2,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:23.831433 systemd[1]: Created slice kubepods-besteffort-pod3b4d50cd_bfa9_4817_b2aa_6df460bb529b.slice - libcontainer container kubepods-besteffort-pod3b4d50cd_bfa9_4817_b2aa_6df460bb529b.slice. Jan 24 00:45:23.834594 kubelet[3160]: I0124 00:45:23.834546 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gp9w8\" (UniqueName: \"kubernetes.io/projected/ea8e1ae1-59b4-45f9-9265-2981e79d3abb-kube-api-access-gp9w8\") pod \"calico-kube-controllers-5598cf5ccb-2mj7w\" (UID: \"ea8e1ae1-59b4-45f9-9265-2981e79d3abb\") " pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" Jan 24 00:45:23.836637 kubelet[3160]: I0124 00:45:23.834615 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6f98da9-ca79-4902-82ee-3f5271b4428b-config-volume\") pod \"coredns-66bc5c9577-4lbk4\" (UID: \"f6f98da9-ca79-4902-82ee-3f5271b4428b\") " pod="kube-system/coredns-66bc5c9577-4lbk4" Jan 24 00:45:23.836637 kubelet[3160]: I0124 00:45:23.834660 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b4d50cd-bfa9-4817-b2aa-6df460bb529b-calico-apiserver-certs\") pod \"calico-apiserver-64999767c9-w9j7d\" (UID: \"3b4d50cd-bfa9-4817-b2aa-6df460bb529b\") " pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" Jan 24 00:45:23.836637 kubelet[3160]: I0124 00:45:23.834687 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htj75\" (UniqueName: \"kubernetes.io/projected/3b4d50cd-bfa9-4817-b2aa-6df460bb529b-kube-api-access-htj75\") pod \"calico-apiserver-64999767c9-w9j7d\" (UID: \"3b4d50cd-bfa9-4817-b2aa-6df460bb529b\") " pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" Jan 24 00:45:23.836637 kubelet[3160]: I0124 00:45:23.834744 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-ca-bundle\") pod \"whisker-5784f66b48-622qn\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " pod="calico-system/whisker-5784f66b48-622qn" Jan 24 00:45:23.836637 kubelet[3160]: I0124 00:45:23.834769 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-backend-key-pair\") pod \"whisker-5784f66b48-622qn\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " pod="calico-system/whisker-5784f66b48-622qn" Jan 24 00:45:23.837402 kubelet[3160]: I0124 00:45:23.834812 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwstl\" (UniqueName: \"kubernetes.io/projected/f6f98da9-ca79-4902-82ee-3f5271b4428b-kube-api-access-vwstl\") pod \"coredns-66bc5c9577-4lbk4\" (UID: \"f6f98da9-ca79-4902-82ee-3f5271b4428b\") " pod="kube-system/coredns-66bc5c9577-4lbk4" Jan 24 00:45:23.837402 kubelet[3160]: I0124 00:45:23.834836 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea8e1ae1-59b4-45f9-9265-2981e79d3abb-tigera-ca-bundle\") pod \"calico-kube-controllers-5598cf5ccb-2mj7w\" (UID: \"ea8e1ae1-59b4-45f9-9265-2981e79d3abb\") " pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" Jan 24 00:45:23.837402 kubelet[3160]: I0124 00:45:23.834858 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59hnd\" (UniqueName: \"kubernetes.io/projected/1c848a50-4637-48b0-8299-82d8998eb7e8-kube-api-access-59hnd\") pod \"whisker-5784f66b48-622qn\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " pod="calico-system/whisker-5784f66b48-622qn" Jan 24 00:45:23.848299 systemd[1]: Created slice kubepods-besteffort-pod7b33a64f_b7f5_40bf_8d4e_99b72fa6bbe9.slice - libcontainer container kubepods-besteffort-pod7b33a64f_b7f5_40bf_8d4e_99b72fa6bbe9.slice. Jan 24 00:45:23.861999 systemd[1]: Created slice kubepods-besteffort-pod60f29bc1_01eb_4e81_a219_3085d4f87052.slice - libcontainer container kubepods-besteffort-pod60f29bc1_01eb_4e81_a219_3085d4f87052.slice. Jan 24 00:45:23.874903 systemd[1]: Created slice kubepods-burstable-podf3b0e4f7_4203_4a9e_8024_a24d4365a71d.slice - libcontainer container kubepods-burstable-podf3b0e4f7_4203_4a9e_8024_a24d4365a71d.slice. 
Jan 24 00:45:23.935700 kubelet[3160]: I0124 00:45:23.935662 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/60f29bc1-01eb-4e81-a219-3085d4f87052-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-wr4rk\" (UID: \"60f29bc1-01eb-4e81-a219-3085d4f87052\") " pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:23.935867 kubelet[3160]: I0124 00:45:23.935714 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f3b0e4f7-4203-4a9e-8024-a24d4365a71d-config-volume\") pod \"coredns-66bc5c9577-98wpg\" (UID: \"f3b0e4f7-4203-4a9e-8024-a24d4365a71d\") " pod="kube-system/coredns-66bc5c9577-98wpg" Jan 24 00:45:23.935867 kubelet[3160]: I0124 00:45:23.935772 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60f29bc1-01eb-4e81-a219-3085d4f87052-config\") pod \"goldmane-7c778bb748-wr4rk\" (UID: \"60f29bc1-01eb-4e81-a219-3085d4f87052\") " pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:23.935867 kubelet[3160]: I0124 00:45:23.935810 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kffdg\" (UniqueName: \"kubernetes.io/projected/60f29bc1-01eb-4e81-a219-3085d4f87052-kube-api-access-kffdg\") pod \"goldmane-7c778bb748-wr4rk\" (UID: \"60f29bc1-01eb-4e81-a219-3085d4f87052\") " pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:23.935867 kubelet[3160]: I0124 00:45:23.935830 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9-calico-apiserver-certs\") pod \"calico-apiserver-64999767c9-nk8rp\" (UID: \"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9\") " pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" Jan 24 00:45:23.935867 kubelet[3160]: I0124 00:45:23.935848 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzm9k\" (UniqueName: \"kubernetes.io/projected/7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9-kube-api-access-qzm9k\") pod \"calico-apiserver-64999767c9-nk8rp\" (UID: \"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9\") " pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" Jan 24 00:45:23.936073 kubelet[3160]: I0124 00:45:23.935914 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzcwz\" (UniqueName: \"kubernetes.io/projected/f3b0e4f7-4203-4a9e-8024-a24d4365a71d-kube-api-access-zzcwz\") pod \"coredns-66bc5c9577-98wpg\" (UID: \"f3b0e4f7-4203-4a9e-8024-a24d4365a71d\") " pod="kube-system/coredns-66bc5c9577-98wpg" Jan 24 00:45:23.936073 kubelet[3160]: I0124 00:45:23.935945 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/60f29bc1-01eb-4e81-a219-3085d4f87052-goldmane-key-pair\") pod \"goldmane-7c778bb748-wr4rk\" (UID: \"60f29bc1-01eb-4e81-a219-3085d4f87052\") " pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:23.968356 containerd[1714]: time="2026-01-24T00:45:23.964983459Z" level=error msg="Failed to destroy network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" error="plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:23.968356 containerd[1714]: time="2026-01-24T00:45:23.966601694Z" level=error msg="encountered an error cleaning up failed sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:23.968356 containerd[1714]: time="2026-01-24T00:45:23.966667796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msr6b,Uid:6289d75a-fb3d-4a7e-b426-fb74d3f97fd2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:23.972133 kubelet[3160]: E0124 00:45:23.972093 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:23.972259 kubelet[3160]: E0124 00:45:23.972157 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msr6b" Jan 24 00:45:23.972259 kubelet[3160]: E0124 00:45:23.972181 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-msr6b" Jan 24 00:45:23.972259 kubelet[3160]: E0124 00:45:23.972238 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:23.984447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298-shm.mount: Deactivated successfully. 
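From here on, every RunPodSandbox attempt fails identically: the Calico CNI plugin stats /var/lib/calico/nodename, the file does not exist yet, and the pods stay in CreatePodSandboxError. As the error text itself says, that file is written by the calico/node container once it is running with /var/lib/calico mounted, so the failures resolve themselves when calico-node becomes ready. A sketch of that gate, polling the exact path the errors name (the poll interval is an assumption; the kubelet retries sandbox creation on its own backoff):

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// The path from the sandbox errors above; calico/node creates it
    	// once it is up with /var/lib/calico mounted.
    	const nodenameFile = "/var/lib/calico/nodename"
    	for {
    		b, err := os.ReadFile(nodenameFile)
    		if err == nil {
    			fmt.Printf("calico/node ready, nodename=%s\n", string(b))
    			return
    		}
    		if !os.IsNotExist(err) {
    			fmt.Fprintln(os.Stderr, err)
    			os.Exit(1)
    		}
    		time.Sleep(2 * time.Second)
    	}
    }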
Jan 24 00:45:24.084435 containerd[1714]: time="2026-01-24T00:45:24.084394356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4lbk4,Uid:f6f98da9-ca79-4902-82ee-3f5271b4428b,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:24.101502 containerd[1714]: time="2026-01-24T00:45:24.101463827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784f66b48-622qn,Uid:1c848a50-4637-48b0-8299-82d8998eb7e8,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:24.128934 containerd[1714]: time="2026-01-24T00:45:24.128868523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5598cf5ccb-2mj7w,Uid:ea8e1ae1-59b4-45f9-9265-2981e79d3abb,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:24.146346 containerd[1714]: time="2026-01-24T00:45:24.145572586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-w9j7d,Uid:3b4d50cd-bfa9-4817-b2aa-6df460bb529b,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:45:24.159967 containerd[1714]: time="2026-01-24T00:45:24.159936498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-nk8rp,Uid:7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9,Namespace:calico-apiserver,Attempt:0,}" Jan 24 00:45:24.182757 containerd[1714]: time="2026-01-24T00:45:24.182554390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wr4rk,Uid:60f29bc1-01eb-4e81-a219-3085d4f87052,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:24.190712 containerd[1714]: time="2026-01-24T00:45:24.190498563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98wpg,Uid:f3b0e4f7-4203-4a9e-8024-a24d4365a71d,Namespace:kube-system,Attempt:0,}" Jan 24 00:45:24.197598 containerd[1714]: time="2026-01-24T00:45:24.197538416Z" level=error msg="Failed to destroy network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.197986 containerd[1714]: time="2026-01-24T00:45:24.197918624Z" level=error msg="encountered an error cleaning up failed sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.198299 containerd[1714]: time="2026-01-24T00:45:24.197998426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4lbk4,Uid:f6f98da9-ca79-4902-82ee-3f5271b4428b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.198471 kubelet[3160]: E0124 00:45:24.198409 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 
00:45:24.198549 kubelet[3160]: E0124 00:45:24.198487 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4lbk4" Jan 24 00:45:24.198549 kubelet[3160]: E0124 00:45:24.198514 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-4lbk4" Jan 24 00:45:24.198894 kubelet[3160]: E0124 00:45:24.198583 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-4lbk4_kube-system(f6f98da9-ca79-4902-82ee-3f5271b4428b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-4lbk4_kube-system(f6f98da9-ca79-4902-82ee-3f5271b4428b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4lbk4" podUID="f6f98da9-ca79-4902-82ee-3f5271b4428b" Jan 24 00:45:24.232968 containerd[1714]: time="2026-01-24T00:45:24.232832583Z" level=error msg="Failed to destroy network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.233274 containerd[1714]: time="2026-01-24T00:45:24.233236092Z" level=error msg="encountered an error cleaning up failed sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.233411 containerd[1714]: time="2026-01-24T00:45:24.233350494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5784f66b48-622qn,Uid:1c848a50-4637-48b0-8299-82d8998eb7e8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.233944 kubelet[3160]: E0124 00:45:24.233790 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Jan 24 00:45:24.233944 kubelet[3160]: E0124 00:45:24.233878 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5784f66b48-622qn" Jan 24 00:45:24.233944 kubelet[3160]: E0124 00:45:24.233905 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5784f66b48-622qn" Jan 24 00:45:24.234470 kubelet[3160]: E0124 00:45:24.234099 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5784f66b48-622qn_calico-system(1c848a50-4637-48b0-8299-82d8998eb7e8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5784f66b48-622qn_calico-system(1c848a50-4637-48b0-8299-82d8998eb7e8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5784f66b48-622qn" podUID="1c848a50-4637-48b0-8299-82d8998eb7e8" Jan 24 00:45:24.322697 containerd[1714]: time="2026-01-24T00:45:24.322648536Z" level=error msg="Failed to destroy network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.323506 containerd[1714]: time="2026-01-24T00:45:24.323299450Z" level=error msg="encountered an error cleaning up failed sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.323506 containerd[1714]: time="2026-01-24T00:45:24.323377852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5598cf5ccb-2mj7w,Uid:ea8e1ae1-59b4-45f9-9265-2981e79d3abb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.323701 kubelet[3160]: E0124 00:45:24.323585 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.323701 kubelet[3160]: E0124 00:45:24.323646 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" Jan 24 00:45:24.323701 kubelet[3160]: E0124 00:45:24.323669 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" Jan 24 00:45:24.323846 kubelet[3160]: E0124 00:45:24.323734 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:24.376774 containerd[1714]: time="2026-01-24T00:45:24.376405505Z" level=error msg="Failed to destroy network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.378359 containerd[1714]: time="2026-01-24T00:45:24.378262945Z" level=error msg="encountered an error cleaning up failed sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.379501 containerd[1714]: time="2026-01-24T00:45:24.379390470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-w9j7d,Uid:3b4d50cd-bfa9-4817-b2aa-6df460bb529b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.379984 kubelet[3160]: E0124 00:45:24.379639 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.379984 kubelet[3160]: E0124 00:45:24.379705 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" Jan 24 00:45:24.379984 kubelet[3160]: E0124 00:45:24.379739 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" Jan 24 00:45:24.380174 kubelet[3160]: E0124 00:45:24.379818 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:24.415975 kubelet[3160]: I0124 00:45:24.415944 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:24.418753 containerd[1714]: time="2026-01-24T00:45:24.418362817Z" level=info msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" Jan 24 00:45:24.419567 containerd[1714]: time="2026-01-24T00:45:24.419393440Z" level=info msg="Ensure that sandbox 47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122 in task-service has been cleanup successfully" Jan 24 00:45:24.421832 kubelet[3160]: I0124 00:45:24.421801 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:24.424261 containerd[1714]: time="2026-01-24T00:45:24.424232345Z" level=info msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" Jan 24 00:45:24.424649 containerd[1714]: time="2026-01-24T00:45:24.424603653Z" level=info msg="Ensure that sandbox e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298 in task-service has been cleanup successfully" Jan 24 00:45:24.428455 containerd[1714]: time="2026-01-24T00:45:24.428410436Z" level=error msg="Failed to destroy network for sandbox 
\"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.429471 containerd[1714]: time="2026-01-24T00:45:24.429436958Z" level=error msg="Failed to destroy network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.431151 kubelet[3160]: I0124 00:45:24.430651 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:24.432547 containerd[1714]: time="2026-01-24T00:45:24.431428401Z" level=error msg="encountered an error cleaning up failed sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.432832 containerd[1714]: time="2026-01-24T00:45:24.432700229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98wpg,Uid:f3b0e4f7-4203-4a9e-8024-a24d4365a71d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.432976 containerd[1714]: time="2026-01-24T00:45:24.431982813Z" level=info msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" Jan 24 00:45:24.433339 containerd[1714]: time="2026-01-24T00:45:24.432070815Z" level=error msg="encountered an error cleaning up failed sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.433777 kubelet[3160]: E0124 00:45:24.433621 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.433777 kubelet[3160]: E0124 00:45:24.433664 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98wpg" Jan 24 00:45:24.433777 kubelet[3160]: E0124 00:45:24.433687 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-98wpg" Jan 24 00:45:24.433933 kubelet[3160]: E0124 00:45:24.433734 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-98wpg_kube-system(f3b0e4f7-4203-4a9e-8024-a24d4365a71d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-98wpg_kube-system(f3b0e4f7-4203-4a9e-8024-a24d4365a71d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-98wpg" podUID="f3b0e4f7-4203-4a9e-8024-a24d4365a71d" Jan 24 00:45:24.434045 containerd[1714]: time="2026-01-24T00:45:24.433791253Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-nk8rp,Uid:7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.434705 kubelet[3160]: E0124 00:45:24.434344 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.434705 kubelet[3160]: E0124 00:45:24.434386 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" Jan 24 00:45:24.434705 kubelet[3160]: E0124 00:45:24.434422 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" Jan 24 00:45:24.434865 kubelet[3160]: E0124 00:45:24.434495 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:45:24.436408 containerd[1714]: time="2026-01-24T00:45:24.435923999Z" level=info msg="Ensure that sandbox f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f in task-service has been cleanup successfully" Jan 24 00:45:24.451855 containerd[1714]: time="2026-01-24T00:45:24.451760643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 24 00:45:24.453284 kubelet[3160]: I0124 00:45:24.453176 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:24.454160 containerd[1714]: time="2026-01-24T00:45:24.453807188Z" level=info msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" Jan 24 00:45:24.456297 containerd[1714]: time="2026-01-24T00:45:24.456241041Z" level=info msg="Ensure that sandbox eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef in task-service has been cleanup successfully" Jan 24 00:45:24.459515 kubelet[3160]: I0124 00:45:24.459492 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:24.463355 containerd[1714]: time="2026-01-24T00:45:24.463294094Z" level=error msg="Failed to destroy network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.463686 containerd[1714]: time="2026-01-24T00:45:24.463655402Z" level=error msg="encountered an error cleaning up failed sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.463787 containerd[1714]: time="2026-01-24T00:45:24.463748704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wr4rk,Uid:60f29bc1-01eb-4e81-a219-3085d4f87052,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.464410 kubelet[3160]: E0124 00:45:24.464382 3160 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 24 00:45:24.464509 kubelet[3160]: E0124 00:45:24.464426 3160 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:24.464509 kubelet[3160]: E0124 00:45:24.464450 3160 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-wr4rk" Jan 24 00:45:24.464601 kubelet[3160]: E0124 00:45:24.464497 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:45:24.468352 containerd[1714]: time="2026-01-24T00:45:24.467205779Z" level=info msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" Jan 24 00:45:24.468685 containerd[1714]: time="2026-01-24T00:45:24.468624910Z" level=info msg="Ensure that sandbox 63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63 in task-service has been cleanup successfully" Jan 24 00:45:24.495952 kubelet[3160]: I0124 00:45:24.495898 3160 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:45:24.563222 containerd[1714]: time="2026-01-24T00:45:24.562888160Z" level=error msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" failed" error="failed to destroy network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.564973 kubelet[3160]: E0124 00:45:24.564908 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:24.565155 kubelet[3160]: E0124 00:45:24.564981 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63"} Jan 24 00:45:24.565155 kubelet[3160]: E0124 00:45:24.565044 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ea8e1ae1-59b4-45f9-9265-2981e79d3abb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:24.565155 kubelet[3160]: E0124 00:45:24.565079 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ea8e1ae1-59b4-45f9-9265-2981e79d3abb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:24.589352 containerd[1714]: time="2026-01-24T00:45:24.587871003Z" level=error msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" failed" error="failed to destroy network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.589352 containerd[1714]: time="2026-01-24T00:45:24.588698921Z" level=error msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" failed" error="failed to destroy network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.589352 containerd[1714]: time="2026-01-24T00:45:24.588745322Z" level=error msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" failed" error="failed to destroy network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.589547 kubelet[3160]: E0124 00:45:24.588110 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:24.589547 kubelet[3160]: E0124 00:45:24.588165 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f"} Jan 24 
00:45:24.589547 kubelet[3160]: E0124 00:45:24.588201 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f6f98da9-ca79-4902-82ee-3f5271b4428b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:24.589547 kubelet[3160]: E0124 00:45:24.588241 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f6f98da9-ca79-4902-82ee-3f5271b4428b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-4lbk4" podUID="f6f98da9-ca79-4902-82ee-3f5271b4428b" Jan 24 00:45:24.589709 kubelet[3160]: E0124 00:45:24.589064 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:24.589709 kubelet[3160]: E0124 00:45:24.589104 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298"} Jan 24 00:45:24.589709 kubelet[3160]: E0124 00:45:24.589294 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:24.589709 kubelet[3160]: E0124 00:45:24.589355 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:24.589854 kubelet[3160]: E0124 00:45:24.589236 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:24.589854 kubelet[3160]: E0124 00:45:24.589396 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122"} Jan 24 00:45:24.589854 kubelet[3160]: E0124 00:45:24.589435 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1c848a50-4637-48b0-8299-82d8998eb7e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:24.589854 kubelet[3160]: E0124 00:45:24.589458 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1c848a50-4637-48b0-8299-82d8998eb7e8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5784f66b48-622qn" podUID="1c848a50-4637-48b0-8299-82d8998eb7e8" Jan 24 00:45:24.591768 containerd[1714]: time="2026-01-24T00:45:24.591723387Z" level=error msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" failed" error="failed to destroy network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:24.591932 kubelet[3160]: E0124 00:45:24.591899 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:24.592033 kubelet[3160]: E0124 00:45:24.591937 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef"} Jan 24 00:45:24.592033 kubelet[3160]: E0124 00:45:24.591967 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3b4d50cd-bfa9-4817-b2aa-6df460bb529b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:24.592033 kubelet[3160]: E0124 00:45:24.591995 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"3b4d50cd-bfa9-4817-b2aa-6df460bb529b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:25.462105 kubelet[3160]: I0124 00:45:25.462053 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:25.463311 containerd[1714]: time="2026-01-24T00:45:25.462899229Z" level=info msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" Jan 24 00:45:25.463311 containerd[1714]: time="2026-01-24T00:45:25.463104834Z" level=info msg="Ensure that sandbox a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d in task-service has been cleanup successfully" Jan 24 00:45:25.467799 kubelet[3160]: I0124 00:45:25.466175 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:25.467897 containerd[1714]: time="2026-01-24T00:45:25.466907516Z" level=info msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" Jan 24 00:45:25.467897 containerd[1714]: time="2026-01-24T00:45:25.467092321Z" level=info msg="Ensure that sandbox 6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd in task-service has been cleanup successfully" Jan 24 00:45:25.470916 kubelet[3160]: I0124 00:45:25.470639 3160 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:25.471473 containerd[1714]: time="2026-01-24T00:45:25.471449015Z" level=info msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" Jan 24 00:45:25.471811 containerd[1714]: time="2026-01-24T00:45:25.471784623Z" level=info msg="Ensure that sandbox 057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710 in task-service has been cleanup successfully" Jan 24 00:45:25.516703 containerd[1714]: time="2026-01-24T00:45:25.516650998Z" level=error msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" failed" error="failed to destroy network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:25.517176 kubelet[3160]: E0124 00:45:25.516891 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:25.517176 kubelet[3160]: E0124 00:45:25.516945 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710"} Jan 24 00:45:25.517176 kubelet[3160]: E0124 00:45:25.516986 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f3b0e4f7-4203-4a9e-8024-a24d4365a71d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:25.517176 kubelet[3160]: E0124 00:45:25.517021 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f3b0e4f7-4203-4a9e-8024-a24d4365a71d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-98wpg" podUID="f3b0e4f7-4203-4a9e-8024-a24d4365a71d" Jan 24 00:45:25.529899 containerd[1714]: time="2026-01-24T00:45:25.529069868Z" level=error msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" failed" error="failed to destroy network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:25.530014 kubelet[3160]: E0124 00:45:25.529743 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:25.530014 kubelet[3160]: E0124 00:45:25.529789 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd"} Jan 24 00:45:25.530014 kubelet[3160]: E0124 00:45:25.529827 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"60f29bc1-01eb-4e81-a219-3085d4f87052\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:25.530014 kubelet[3160]: E0124 00:45:25.529861 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"60f29bc1-01eb-4e81-a219-3085d4f87052\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:45:25.530430 containerd[1714]: time="2026-01-24T00:45:25.530387397Z" level=error msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" failed" error="failed to destroy network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 24 00:45:25.530602 kubelet[3160]: E0124 00:45:25.530570 3160 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:25.530688 kubelet[3160]: E0124 00:45:25.530610 3160 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d"} Jan 24 00:45:25.530688 kubelet[3160]: E0124 00:45:25.530645 3160 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 24 00:45:25.530688 kubelet[3160]: E0124 00:45:25.530674 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:45:32.764314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1315317655.mount: Deactivated successfully. 
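Every CreatePodSandbox and KillPodSandbox failure above shares one root cause: the Calico CNI plugin cannot read /var/lib/calico/nodename, a file that the calico/node container writes only once it is running with /var/lib/calico/ mounted. A minimal Go sketch of that gate, assuming the plugin simply reads the file (an approximation for illustration, not the actual Calico source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by the calico/node container at startup; until
// then, every CNI add/delete on this host fails with the error seen above.
const nodenameFile = "/var/lib/calico/nodename"

func nodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Mirrors the failure mode and hint text repeated through the log.
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := nodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("CNI will register workloads under node:", name)
}

Once calico-node starts (image pull and StartContainer below), the file exists and the same sandbox operations begin to succeed.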
Jan 24 00:45:32.800367 containerd[1714]: time="2026-01-24T00:45:32.800063610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:32.803394 containerd[1714]: time="2026-01-24T00:45:32.803342783Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 24 00:45:32.806260 containerd[1714]: time="2026-01-24T00:45:32.806197447Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:32.811125 containerd[1714]: time="2026-01-24T00:45:32.811074755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 24 00:45:32.811729 containerd[1714]: time="2026-01-24T00:45:32.811694169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 8.359896525s" Jan 24 00:45:32.811805 containerd[1714]: time="2026-01-24T00:45:32.811734070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 24 00:45:32.831763 containerd[1714]: time="2026-01-24T00:45:32.831731815Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 24 00:45:32.869763 containerd[1714]: time="2026-01-24T00:45:32.869725061Z" level=info msg="CreateContainer within sandbox \"7a324d3a1bee2d96a445233be6acea6c74199856ab951cab8cffd8ed6a52fff9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf\"" Jan 24 00:45:32.871406 containerd[1714]: time="2026-01-24T00:45:32.871370698Z" level=info msg="StartContainer for \"0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf\"" Jan 24 00:45:32.904512 systemd[1]: Started cri-containerd-0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf.scope - libcontainer container 0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf. Jan 24 00:45:32.940220 containerd[1714]: time="2026-01-24T00:45:32.939646518Z" level=info msg="StartContainer for \"0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf\" returns successfully" Jan 24 00:45:33.330037 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 24 00:45:33.330172 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. 
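The pull above reports both the byte count and the wall time, so the effective registry throughput is easy to check: roughly 156.9 MB in 8.36 s, about 17.9 MiB/s. (The WireGuard module load that follows is consistent with calico-node probing for encryption support at startup.) A quick computation using the two values reported by containerd:

package main

import "fmt"

func main() {
	const bytesRead = 156883675 // "bytes read" from the stop-pulling event
	const seconds = 8.359896525 // duration reported for the completed pull
	fmt.Printf("pull throughput: %.1f MiB/s\n", bytesRead/seconds/(1024*1024))
}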
Jan 24 00:45:33.435010 containerd[1714]: time="2026-01-24T00:45:33.434843345Z" level=info msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.529 [INFO][4362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.529 [INFO][4362] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" iface="eth0" netns="/var/run/netns/cni-6d18ff4b-b021-75f9-f911-c26a43adad2d" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.530 [INFO][4362] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" iface="eth0" netns="/var/run/netns/cni-6d18ff4b-b021-75f9-f911-c26a43adad2d" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.530 [INFO][4362] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" iface="eth0" netns="/var/run/netns/cni-6d18ff4b-b021-75f9-f911-c26a43adad2d" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.530 [INFO][4362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.530 [INFO][4362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.582 [INFO][4373] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.583 [INFO][4373] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.583 [INFO][4373] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.588 [WARNING][4373] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.588 [INFO][4373] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.590 [INFO][4373] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:33.596205 containerd[1714]: 2026-01-24 00:45:33.593 [INFO][4362] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:33.596205 containerd[1714]: time="2026-01-24T00:45:33.596153337Z" level=info msg="TearDown network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" successfully" Jan 24 00:45:33.596205 containerd[1714]: time="2026-01-24T00:45:33.596199438Z" level=info msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" returns successfully" Jan 24 00:45:33.715275 kubelet[3160]: I0124 00:45:33.714837 3160 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59hnd\" (UniqueName: \"kubernetes.io/projected/1c848a50-4637-48b0-8299-82d8998eb7e8-kube-api-access-59hnd\") pod \"1c848a50-4637-48b0-8299-82d8998eb7e8\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " Jan 24 00:45:33.715275 kubelet[3160]: I0124 00:45:33.714890 3160 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-ca-bundle\") pod \"1c848a50-4637-48b0-8299-82d8998eb7e8\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " Jan 24 00:45:33.715275 kubelet[3160]: I0124 00:45:33.714945 3160 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-backend-key-pair\") pod \"1c848a50-4637-48b0-8299-82d8998eb7e8\" (UID: \"1c848a50-4637-48b0-8299-82d8998eb7e8\") " Jan 24 00:45:33.719054 kubelet[3160]: I0124 00:45:33.718772 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1c848a50-4637-48b0-8299-82d8998eb7e8" (UID: "1c848a50-4637-48b0-8299-82d8998eb7e8"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 24 00:45:33.719054 kubelet[3160]: I0124 00:45:33.718979 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1c848a50-4637-48b0-8299-82d8998eb7e8" (UID: "1c848a50-4637-48b0-8299-82d8998eb7e8"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 24 00:45:33.719553 kubelet[3160]: I0124 00:45:33.719519 3160 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c848a50-4637-48b0-8299-82d8998eb7e8-kube-api-access-59hnd" (OuterVolumeSpecName: "kube-api-access-59hnd") pod "1c848a50-4637-48b0-8299-82d8998eb7e8" (UID: "1c848a50-4637-48b0-8299-82d8998eb7e8"). InnerVolumeSpecName "kube-api-access-59hnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 24 00:45:33.764869 systemd[1]: run-netns-cni\x2d6d18ff4b\x2db021\x2d75f9\x2df911\x2dc26a43adad2d.mount: Deactivated successfully. Jan 24 00:45:33.764996 systemd[1]: var-lib-kubelet-pods-1c848a50\x2d4637\x2d48b0\x2d8299\x2d82d8998eb7e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d59hnd.mount: Deactivated successfully. Jan 24 00:45:33.765100 systemd[1]: var-lib-kubelet-pods-1c848a50\x2d4637\x2d48b0\x2d8299\x2d82d8998eb7e8-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 24 00:45:33.816172 kubelet[3160]: I0124 00:45:33.816119 3160 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-e69c55f9b7\" DevicePath \"\"" Jan 24 00:45:33.816172 kubelet[3160]: I0124 00:45:33.816159 3160 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-59hnd\" (UniqueName: \"kubernetes.io/projected/1c848a50-4637-48b0-8299-82d8998eb7e8-kube-api-access-59hnd\") on node \"ci-4081.3.6-n-e69c55f9b7\" DevicePath \"\"" Jan 24 00:45:33.816172 kubelet[3160]: I0124 00:45:33.816175 3160 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1c848a50-4637-48b0-8299-82d8998eb7e8-whisker-ca-bundle\") on node \"ci-4081.3.6-n-e69c55f9b7\" DevicePath \"\"" Jan 24 00:45:34.304546 systemd[1]: Removed slice kubepods-besteffort-pod1c848a50_4637_48b0_8299_82d8998eb7e8.slice - libcontainer container kubepods-besteffort-pod1c848a50_4637_48b0_8299_82d8998eb7e8.slice. Jan 24 00:45:34.527664 kubelet[3160]: I0124 00:45:34.526567 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bmptt" podStartSLOduration=2.394045221 podStartE2EDuration="24.526546754s" podCreationTimestamp="2026-01-24 00:45:10 +0000 UTC" firstStartedPulling="2026-01-24 00:45:10.680066256 +0000 UTC m=+22.981184452" lastFinishedPulling="2026-01-24 00:45:32.812567789 +0000 UTC m=+45.113685985" observedRunningTime="2026-01-24 00:45:33.542651646 +0000 UTC m=+45.843769842" watchObservedRunningTime="2026-01-24 00:45:34.526546754 +0000 UTC m=+46.827664950" Jan 24 00:45:34.631089 systemd[1]: Created slice kubepods-besteffort-pod0dadebec_93b1_44bd_9cc0_05be5a1a434d.slice - libcontainer container kubepods-besteffort-pod0dadebec_93b1_44bd_9cc0_05be5a1a434d.slice. Jan 24 00:45:34.633420 kubelet[3160]: I0124 00:45:34.632617 3160 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 24 00:45:34.677080 systemd[1]: run-containerd-runc-k8s.io-0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf-runc.dZNKZf.mount: Deactivated successfully. 
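The pod startup latency line above is internally consistent: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (00:45:34.526546754 - 00:45:10 = 24.526546754s), and podStartSLOduration subtracts the image-pull window (lastFinishedPulling - firstStartedPulling = 22.132501533s), leaving 2.394045221s. A check of that arithmetic using the timestamps from the log:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-24T00:45:10Z")
	running := mustParse("2026-01-24T00:45:34.526546754Z")
	pullStart := mustParse("2026-01-24T00:45:10.680066256Z")
	pullEnd := mustParse("2026-01-24T00:45:32.812567789Z")

	e2e := running.Sub(created)         // 24.526546754s (podStartE2EDuration)
	slo := e2e - pullEnd.Sub(pullStart) // 2.394045221s (podStartSLOduration)
	fmt.Println(e2e, slo)
}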
Jan 24 00:45:34.723348 kubelet[3160]: I0124 00:45:34.721556 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rr8mp\" (UniqueName: \"kubernetes.io/projected/0dadebec-93b1-44bd-9cc0-05be5a1a434d-kube-api-access-rr8mp\") pod \"whisker-65cb7dc6d6-nfm24\" (UID: \"0dadebec-93b1-44bd-9cc0-05be5a1a434d\") " pod="calico-system/whisker-65cb7dc6d6-nfm24" Jan 24 00:45:34.723348 kubelet[3160]: I0124 00:45:34.721623 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0dadebec-93b1-44bd-9cc0-05be5a1a434d-whisker-backend-key-pair\") pod \"whisker-65cb7dc6d6-nfm24\" (UID: \"0dadebec-93b1-44bd-9cc0-05be5a1a434d\") " pod="calico-system/whisker-65cb7dc6d6-nfm24" Jan 24 00:45:34.723348 kubelet[3160]: I0124 00:45:34.721658 3160 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dadebec-93b1-44bd-9cc0-05be5a1a434d-whisker-ca-bundle\") pod \"whisker-65cb7dc6d6-nfm24\" (UID: \"0dadebec-93b1-44bd-9cc0-05be5a1a434d\") " pod="calico-system/whisker-65cb7dc6d6-nfm24" Jan 24 00:45:34.944627 containerd[1714]: time="2026-01-24T00:45:34.944140853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65cb7dc6d6-nfm24,Uid:0dadebec-93b1-44bd-9cc0-05be5a1a434d,Namespace:calico-system,Attempt:0,}" Jan 24 00:45:35.205221 systemd-networkd[1356]: cali24046bf4050: Link UP Jan 24 00:45:35.207571 systemd-networkd[1356]: cali24046bf4050: Gained carrier Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.032 [INFO][4503] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.057 [INFO][4503] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0 whisker-65cb7dc6d6- calico-system 0dadebec-93b1-44bd-9cc0-05be5a1a434d 903 0 2026-01-24 00:45:34 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:65cb7dc6d6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 whisker-65cb7dc6d6-nfm24 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali24046bf4050 [] [] }} ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.057 [INFO][4503] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.115 [INFO][4517] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" HandleID="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.116 [INFO][4517] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" HandleID="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"whisker-65cb7dc6d6-nfm24", "timestamp":"2026-01-24 00:45:35.115710874 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.116 [INFO][4517] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.116 [INFO][4517] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.116 [INFO][4517] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.125 [INFO][4517] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.135 [INFO][4517] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.143 [INFO][4517] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.145 [INFO][4517] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.148 [INFO][4517] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.148 [INFO][4517] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.152 [INFO][4517] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7 Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.162 [INFO][4517] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.168 [INFO][4517] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.65/26] block=192.168.76.64/26 handle="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.169 [INFO][4517] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.65/26] handle="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.169 
[INFO][4517] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:35.229541 containerd[1714]: 2026-01-24 00:45:35.169 [INFO][4517] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.65/26] IPv6=[] ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" HandleID="k8s-pod-network.426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.175 [INFO][4503] cni-plugin/k8s.go 418: Populated endpoint ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0", GenerateName:"whisker-65cb7dc6d6-", Namespace:"calico-system", SelfLink:"", UID:"0dadebec-93b1-44bd-9cc0-05be5a1a434d", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65cb7dc6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"whisker-65cb7dc6d6-nfm24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24046bf4050", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.175 [INFO][4503] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.65/32] ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.175 [INFO][4503] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali24046bf4050 ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.204 [INFO][4503] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.205 [INFO][4503] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0", GenerateName:"whisker-65cb7dc6d6-", Namespace:"calico-system", SelfLink:"", UID:"0dadebec-93b1-44bd-9cc0-05be5a1a434d", ResourceVersion:"903", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"65cb7dc6d6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7", Pod:"whisker-65cb7dc6d6-nfm24", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.76.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali24046bf4050", MAC:"86:70:b4:b1:31:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:35.232724 containerd[1714]: 2026-01-24 00:45:35.220 [INFO][4503] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7" Namespace="calico-system" Pod="whisker-65cb7dc6d6-nfm24" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--65cb7dc6d6--nfm24-eth0" Jan 24 00:45:35.279269 containerd[1714]: time="2026-01-24T00:45:35.277709781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:35.279269 containerd[1714]: time="2026-01-24T00:45:35.277891585Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:35.279269 containerd[1714]: time="2026-01-24T00:45:35.278021888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.279269 containerd[1714]: time="2026-01-24T00:45:35.278677002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:35.311512 systemd[1]: Started cri-containerd-426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7.scope - libcontainer container 426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7. 
Jan 24 00:45:35.390852 containerd[1714]: time="2026-01-24T00:45:35.390765498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65cb7dc6d6-nfm24,Uid:0dadebec-93b1-44bd-9cc0-05be5a1a434d,Namespace:calico-system,Attempt:0,} returns sandbox id \"426ed365426f9b908496e91b89f72fd195062c961290eed6aad72b2095f765d7\"" Jan 24 00:45:35.394929 containerd[1714]: time="2026-01-24T00:45:35.393428558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:35.418361 kernel: bpftool[4626]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 24 00:45:35.690118 containerd[1714]: time="2026-01-24T00:45:35.690066363Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:35.692989 containerd[1714]: time="2026-01-24T00:45:35.692776623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:35.692989 containerd[1714]: time="2026-01-24T00:45:35.692807524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:45:35.693166 kubelet[3160]: E0124 00:45:35.693068 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:35.693166 kubelet[3160]: E0124 00:45:35.693123 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:35.693601 kubelet[3160]: E0124 00:45:35.693533 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:35.695260 containerd[1714]: time="2026-01-24T00:45:35.694575163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:35.790704 systemd-networkd[1356]: vxlan.calico: Link UP Jan 24 00:45:35.790716 systemd-networkd[1356]: vxlan.calico: Gained carrier Jan 24 00:45:35.966455 containerd[1714]: time="2026-01-24T00:45:35.966053509Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:35.968750 containerd[1714]: time="2026-01-24T00:45:35.968680667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:35.969491 
containerd[1714]: time="2026-01-24T00:45:35.968785969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:35.969592 kubelet[3160]: E0124 00:45:35.968971 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:35.969592 kubelet[3160]: E0124 00:45:35.969023 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:35.969592 kubelet[3160]: E0124 00:45:35.969116 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:35.969992 kubelet[3160]: E0124 00:45:35.969167 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:45:36.298098 containerd[1714]: time="2026-01-24T00:45:36.297985200Z" level=info msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" Jan 24 00:45:36.302249 containerd[1714]: time="2026-01-24T00:45:36.301837886Z" level=info msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" Jan 24 00:45:36.302768 containerd[1714]: time="2026-01-24T00:45:36.302738306Z" level=info msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" Jan 24 00:45:36.314368 kubelet[3160]: I0124 00:45:36.314271 3160 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c848a50-4637-48b0-8299-82d8998eb7e8" path="/var/lib/kubelet/pods/1c848a50-4637-48b0-8299-82d8998eb7e8/volumes" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.422 [INFO][4726] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.422 [INFO][4726] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" iface="eth0" netns="/var/run/netns/cni-c823cc23-7e3c-b4fc-7cf1-2b5775bb4bb1" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.423 [INFO][4726] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" iface="eth0" netns="/var/run/netns/cni-c823cc23-7e3c-b4fc-7cf1-2b5775bb4bb1" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.423 [INFO][4726] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" iface="eth0" netns="/var/run/netns/cni-c823cc23-7e3c-b4fc-7cf1-2b5775bb4bb1" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.423 [INFO][4726] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.423 [INFO][4726] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.464 [INFO][4751] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.465 [INFO][4751] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.465 [INFO][4751] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.473 [WARNING][4751] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.473 [INFO][4751] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.475 [INFO][4751] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:36.481352 containerd[1714]: 2026-01-24 00:45:36.476 [INFO][4726] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:36.484852 containerd[1714]: time="2026-01-24T00:45:36.484404951Z" level=info msg="TearDown network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" successfully" Jan 24 00:45:36.484852 containerd[1714]: time="2026-01-24T00:45:36.484447252Z" level=info msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" returns successfully" Jan 24 00:45:36.485657 systemd[1]: run-netns-cni\x2dc823cc23\x2d7e3c\x2db4fc\x2d7cf1\x2d2b5775bb4bb1.mount: Deactivated successfully. Jan 24 00:45:36.495435 containerd[1714]: time="2026-01-24T00:45:36.495383195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-w9j7d,Uid:3b4d50cd-bfa9-4817-b2aa-6df460bb529b,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.406 [INFO][4730] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.406 [INFO][4730] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" iface="eth0" netns="/var/run/netns/cni-ea9b10e1-edb5-22cb-70b5-ac6ae1f8b8b3" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.408 [INFO][4730] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" iface="eth0" netns="/var/run/netns/cni-ea9b10e1-edb5-22cb-70b5-ac6ae1f8b8b3" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.411 [INFO][4730] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" iface="eth0" netns="/var/run/netns/cni-ea9b10e1-edb5-22cb-70b5-ac6ae1f8b8b3" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.411 [INFO][4730] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.412 [INFO][4730] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.490 [INFO][4745] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.491 [INFO][4745] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.491 [INFO][4745] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.504 [WARNING][4745] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.504 [INFO][4745] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.506 [INFO][4745] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:36.516564 containerd[1714]: 2026-01-24 00:45:36.511 [INFO][4730] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:36.517540 kubelet[3160]: E0124 00:45:36.516833 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:45:36.519703 containerd[1714]: time="2026-01-24T00:45:36.519665536Z" level=info msg="TearDown network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" successfully" Jan 24 00:45:36.521024 containerd[1714]: time="2026-01-24T00:45:36.519702537Z" level=info msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" returns successfully" Jan 24 00:45:36.527430 systemd[1]: run-netns-cni\x2dea9b10e1\x2dedb5\x2d22cb\x2d70b5\x2dac6ae1f8b8b3.mount: Deactivated successfully. Jan 24 00:45:36.530472 containerd[1714]: time="2026-01-24T00:45:36.527885119Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98wpg,Uid:f3b0e4f7-4203-4a9e-8024-a24d4365a71d,Namespace:kube-system,Attempt:1,}" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.432 [INFO][4721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.432 [INFO][4721] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" iface="eth0" netns="/var/run/netns/cni-74140b6a-c86d-dc8f-d269-a9ef73b26d9a" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.432 [INFO][4721] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" iface="eth0" netns="/var/run/netns/cni-74140b6a-c86d-dc8f-d269-a9ef73b26d9a" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.435 [INFO][4721] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" iface="eth0" netns="/var/run/netns/cni-74140b6a-c86d-dc8f-d269-a9ef73b26d9a" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.435 [INFO][4721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.435 [INFO][4721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.498 [INFO][4756] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.498 [INFO][4756] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.506 [INFO][4756] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.517 [WARNING][4756] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.517 [INFO][4756] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.523 [INFO][4756] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:36.532472 containerd[1714]: 2026-01-24 00:45:36.527 [INFO][4721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:36.536367 containerd[1714]: time="2026-01-24T00:45:36.533402142Z" level=info msg="TearDown network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" successfully" Jan 24 00:45:36.536367 containerd[1714]: time="2026-01-24T00:45:36.533432343Z" level=info msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" returns successfully" Jan 24 00:45:36.537555 systemd[1]: run-netns-cni\x2d74140b6a\x2dc86d\x2ddc8f\x2dd269\x2da9ef73b26d9a.mount: Deactivated successfully. 
Jan 24 00:45:36.542069 containerd[1714]: time="2026-01-24T00:45:36.542006534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wr4rk,Uid:60f29bc1-01eb-4e81-a219-3085d4f87052,Namespace:calico-system,Attempt:1,}" Jan 24 00:45:36.711499 systemd-networkd[1356]: cali24046bf4050: Gained IPv6LL Jan 24 00:45:36.742270 systemd-networkd[1356]: cali275601b9b8f: Link UP Jan 24 00:45:36.742823 systemd-networkd[1356]: cali275601b9b8f: Gained carrier Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.598 [INFO][4768] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0 calico-apiserver-64999767c9- calico-apiserver 3b4d50cd-bfa9-4817-b2aa-6df460bb529b 925 0 2026-01-24 00:45:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64999767c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 calico-apiserver-64999767c9-w9j7d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali275601b9b8f [] [] }} ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.598 [INFO][4768] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.674 [INFO][4793] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" HandleID="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.675 [INFO][4793] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" HandleID="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"calico-apiserver-64999767c9-w9j7d", "timestamp":"2026-01-24 00:45:36.674519684 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.675 [INFO][4793] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.676 [INFO][4793] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.676 [INFO][4793] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.689 [INFO][4793] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.697 [INFO][4793] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.703 [INFO][4793] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.705 [INFO][4793] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.707 [INFO][4793] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.707 [INFO][4793] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.709 [INFO][4793] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4 Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.727 [INFO][4793] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4793] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.66/26] block=192.168.76.64/26 handle="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4793] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.66/26] handle="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4793] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:36.769900 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4793] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.66/26] IPv6=[] ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" HandleID="k8s-pod-network.2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.739 [INFO][4768] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b4d50cd-bfa9-4817-b2aa-6df460bb529b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"calico-apiserver-64999767c9-w9j7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275601b9b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.739 [INFO][4768] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.66/32] ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.739 [INFO][4768] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali275601b9b8f ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.743 [INFO][4768] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.743 
[INFO][4768] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b4d50cd-bfa9-4817-b2aa-6df460bb529b", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4", Pod:"calico-apiserver-64999767c9-w9j7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275601b9b8f", MAC:"fa:ce:5c:6e:c7:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:36.770901 containerd[1714]: 2026-01-24 00:45:36.766 [INFO][4768] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-w9j7d" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:36.805723 containerd[1714]: time="2026-01-24T00:45:36.805604703Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:36.805723 containerd[1714]: time="2026-01-24T00:45:36.805671405Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:36.805723 containerd[1714]: time="2026-01-24T00:45:36.805686905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:36.806565 containerd[1714]: time="2026-01-24T00:45:36.805767707Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:36.830540 systemd[1]: Started cri-containerd-2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4.scope - libcontainer container 2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4. 
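The teardown entries further up each log "Asked to release address but it doesn't exist. Ignoring": the sandboxes being replaced had no live IPAM handle, and release is deliberately idempotent so a retried CNI DEL can never fail or corrupt state. A minimal stdlib-only sketch of that behavior, with toy types rather than Calico's code:

```go
package main

import (
	"fmt"
	"sync"
)

// Toy allocator modelling idempotent release: releasing a handle that was
// never allocated (or was already released) is a warning-only no-op.
type allocator struct {
	mu       sync.Mutex        // stands in for the host-wide IPAM lock
	byHandle map[string]string // handle ID -> IP
}

func (a *allocator) release(handleID string) {
	a.mu.Lock()
	defer a.mu.Unlock()
	ip, ok := a.byHandle[handleID]
	if !ok {
		fmt.Printf("WARNING: asked to release %s but it doesn't exist. Ignoring\n", handleID)
		return // idempotent: nothing to do
	}
	delete(a.byHandle, handleID)
	fmt.Printf("released %s (handle %s)\n", ip, handleID)
}

func main() {
	a := &allocator{byHandle: map[string]string{
		"k8s-pod-network.eccc7a1b": "192.168.76.70", // toy entry
	}}
	a.release("k8s-pod-network.eccc7a1b") // first DEL releases the IP
	a.release("k8s-pod-network.eccc7a1b") // retried DEL: warning, no error
}
```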
Jan 24 00:45:36.871702 systemd-networkd[1356]: calic48e80977ca: Link UP Jan 24 00:45:36.873103 systemd-networkd[1356]: calic48e80977ca: Gained carrier Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.646 [INFO][4781] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0 coredns-66bc5c9577- kube-system f3b0e4f7-4203-4a9e-8024-a24d4365a71d 924 0 2026-01-24 00:44:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 coredns-66bc5c9577-98wpg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic48e80977ca [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.646 [INFO][4781] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.694 [INFO][4811] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" HandleID="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.694 [INFO][4811] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" HandleID="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4fe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"coredns-66bc5c9577-98wpg", "timestamp":"2026-01-24 00:45:36.694628432 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.694 [INFO][4811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.737 [INFO][4811] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.792 [INFO][4811] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.799 [INFO][4811] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.806 [INFO][4811] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.809 [INFO][4811] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.812 [INFO][4811] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.812 [INFO][4811] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.815 [INFO][4811] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.824 [INFO][4811] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.837 [INFO][4811] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.67/26] block=192.168.76.64/26 handle="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.837 [INFO][4811] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.67/26] handle="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.837 [INFO][4811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:36.894133 containerd[1714]: 2026-01-24 00:45:36.837 [INFO][4811] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.67/26] IPv6=[] ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" HandleID="k8s-pod-network.da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.896107 containerd[1714]: 2026-01-24 00:45:36.842 [INFO][4781] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3b0e4f7-4203-4a9e-8024-a24d4365a71d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"coredns-66bc5c9577-98wpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48e80977ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:36.896107 containerd[1714]: 2026-01-24 00:45:36.842 [INFO][4781] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.67/32] ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.896107 containerd[1714]: 2026-01-24 00:45:36.842 [INFO][4781] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic48e80977ca ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" 
WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.896107 containerd[1714]: 2026-01-24 00:45:36.874 [INFO][4781] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.896107 containerd[1714]: 2026-01-24 00:45:36.874 [INFO][4781] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3b0e4f7-4203-4a9e-8024-a24d4365a71d", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e", Pod:"coredns-66bc5c9577-98wpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48e80977ca", MAC:"ae:61:e1:59:94:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:36.897738 containerd[1714]: 2026-01-24 00:45:36.890 [INFO][4781] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e" Namespace="kube-system" Pod="coredns-66bc5c9577-98wpg" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:36.937045 containerd[1714]: time="2026-01-24T00:45:36.936819925Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-64999767c9-w9j7d,Uid:3b4d50cd-bfa9-4817-b2aa-6df460bb529b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4\"" Jan 24 00:45:36.941177 containerd[1714]: time="2026-01-24T00:45:36.941143021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:36.949570 containerd[1714]: time="2026-01-24T00:45:36.949351504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:36.949999 containerd[1714]: time="2026-01-24T00:45:36.949412605Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:36.949999 containerd[1714]: time="2026-01-24T00:45:36.949450606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:36.949999 containerd[1714]: time="2026-01-24T00:45:36.949639911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:36.970127 systemd-networkd[1356]: cali4fe16ff7b82: Link UP Jan 24 00:45:36.970392 systemd-networkd[1356]: cali4fe16ff7b82: Gained carrier Jan 24 00:45:37.008192 systemd[1]: run-containerd-runc-k8s.io-da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e-runc.2kC9Kh.mount: Deactivated successfully. Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.689 [INFO][4790] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0 goldmane-7c778bb748- calico-system 60f29bc1-01eb-4e81-a219-3085d4f87052 926 0 2026-01-24 00:45:07 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 goldmane-7c778bb748-wr4rk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4fe16ff7b82 [] [] }} ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.689 [INFO][4790] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.723 [INFO][4822] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" HandleID="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.724 [INFO][4822] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" HandleID="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" 
Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5880), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"goldmane-7c778bb748-wr4rk", "timestamp":"2026-01-24 00:45:36.723990286 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.724 [INFO][4822] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.848 [INFO][4822] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.848 [INFO][4822] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.893 [INFO][4822] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.904 [INFO][4822] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.911 [INFO][4822] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.913 [INFO][4822] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.919 [INFO][4822] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.919 [INFO][4822] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.922 [INFO][4822] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.930 [INFO][4822] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.944 [INFO][4822] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.68/26] block=192.168.76.64/26 handle="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.945 [INFO][4822] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.68/26] handle="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.946 [INFO][4822] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:37.013810 containerd[1714]: 2026-01-24 00:45:36.946 [INFO][4822] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.68/26] IPv6=[] ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" HandleID="k8s-pod-network.7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:36.961 [INFO][4790] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"60f29bc1-01eb-4e81-a219-3085d4f87052", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"goldmane-7c778bb748-wr4rk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fe16ff7b82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:36.963 [INFO][4790] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.68/32] ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:36.963 [INFO][4790] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4fe16ff7b82 ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:36.970 [INFO][4790] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:36.978 [INFO][4790] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" 
Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"60f29bc1-01eb-4e81-a219-3085d4f87052", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b", Pod:"goldmane-7c778bb748-wr4rk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fe16ff7b82", MAC:"aa:f1:ae:be:1e:0c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.016738 containerd[1714]: 2026-01-24 00:45:37.006 [INFO][4790] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b" Namespace="calico-system" Pod="goldmane-7c778bb748-wr4rk" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:37.022589 systemd[1]: Started cri-containerd-da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e.scope - libcontainer container da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e. Jan 24 00:45:37.065369 containerd[1714]: time="2026-01-24T00:45:37.062839031Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:37.067407 containerd[1714]: time="2026-01-24T00:45:37.065343987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:37.067407 containerd[1714]: time="2026-01-24T00:45:37.065369188Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.067407 containerd[1714]: time="2026-01-24T00:45:37.065513291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.101716 containerd[1714]: time="2026-01-24T00:45:37.101685896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-98wpg,Uid:f3b0e4f7-4203-4a9e-8024-a24d4365a71d,Namespace:kube-system,Attempt:1,} returns sandbox id \"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e\"" Jan 24 00:45:37.114348 containerd[1714]: time="2026-01-24T00:45:37.114310677Z" level=info msg="CreateContainer within sandbox \"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:45:37.122929 systemd[1]: Started cri-containerd-7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b.scope - libcontainer container 7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b. Jan 24 00:45:37.165681 containerd[1714]: time="2026-01-24T00:45:37.165620320Z" level=info msg="CreateContainer within sandbox \"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e96ac14e28c5399b8fd025d5bf656f7cc56457df635a110cfd5b0046ff12eed8\"" Jan 24 00:45:37.168830 containerd[1714]: time="2026-01-24T00:45:37.168458083Z" level=info msg="StartContainer for \"e96ac14e28c5399b8fd025d5bf656f7cc56457df635a110cfd5b0046ff12eed8\"" Jan 24 00:45:37.204917 systemd[1]: Started cri-containerd-e96ac14e28c5399b8fd025d5bf656f7cc56457df635a110cfd5b0046ff12eed8.scope - libcontainer container e96ac14e28c5399b8fd025d5bf656f7cc56457df635a110cfd5b0046ff12eed8. Jan 24 00:45:37.234313 containerd[1714]: time="2026-01-24T00:45:37.233961842Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:37.237918 containerd[1714]: time="2026-01-24T00:45:37.237870429Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:37.238834 containerd[1714]: time="2026-01-24T00:45:37.238040833Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:37.238938 kubelet[3160]: E0124 00:45:37.238319 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:37.238938 kubelet[3160]: E0124 00:45:37.238394 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:37.238938 kubelet[3160]: E0124 00:45:37.238492 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:37.238938 kubelet[3160]: E0124 00:45:37.238537 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:37.240654 containerd[1714]: time="2026-01-24T00:45:37.240622790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-wr4rk,Uid:60f29bc1-01eb-4e81-a219-3085d4f87052,Namespace:calico-system,Attempt:1,} returns sandbox id \"7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b\"" Jan 24 00:45:37.246483 containerd[1714]: time="2026-01-24T00:45:37.244978487Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:37.273927 containerd[1714]: time="2026-01-24T00:45:37.273887331Z" level=info msg="StartContainer for \"e96ac14e28c5399b8fd025d5bf656f7cc56457df635a110cfd5b0046ff12eed8\" returns successfully" Jan 24 00:45:37.296864 containerd[1714]: time="2026-01-24T00:45:37.296397832Z" level=info msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" Jan 24 00:45:37.296864 containerd[1714]: time="2026-01-24T00:45:37.296439933Z" level=info msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.386 [INFO][5037] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.387 [INFO][5037] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" iface="eth0" netns="/var/run/netns/cni-fb077324-34d3-3c9b-2c34-4b92cd0c860c" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.387 [INFO][5037] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" iface="eth0" netns="/var/run/netns/cni-fb077324-34d3-3c9b-2c34-4b92cd0c860c" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.387 [INFO][5037] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" iface="eth0" netns="/var/run/netns/cni-fb077324-34d3-3c9b-2c34-4b92cd0c860c" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.387 [INFO][5037] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.387 [INFO][5037] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.416 [INFO][5055] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.416 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.416 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.422 [WARNING][5055] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.422 [INFO][5055] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.423 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:37.426154 containerd[1714]: 2026-01-24 00:45:37.424 [INFO][5037] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:37.428160 containerd[1714]: time="2026-01-24T00:45:37.427562253Z" level=info msg="TearDown network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" successfully" Jan 24 00:45:37.428160 containerd[1714]: time="2026-01-24T00:45:37.427772057Z" level=info msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" returns successfully" Jan 24 00:45:37.434437 containerd[1714]: time="2026-01-24T00:45:37.434017996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4lbk4,Uid:f6f98da9-ca79-4902-82ee-3f5271b4428b,Namespace:kube-system,Attempt:1,}" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.378 [INFO][5036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.378 [INFO][5036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" iface="eth0" netns="/var/run/netns/cni-cad4d0c3-e360-84eb-0647-8cd14c5c7bc1" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.379 [INFO][5036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" iface="eth0" netns="/var/run/netns/cni-cad4d0c3-e360-84eb-0647-8cd14c5c7bc1" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.380 [INFO][5036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" iface="eth0" netns="/var/run/netns/cni-cad4d0c3-e360-84eb-0647-8cd14c5c7bc1" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.380 [INFO][5036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.380 [INFO][5036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.417 [INFO][5050] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.417 [INFO][5050] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.423 [INFO][5050] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.431 [WARNING][5050] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.431 [INFO][5050] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.433 [INFO][5050] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:37.436549 containerd[1714]: 2026-01-24 00:45:37.435 [INFO][5036] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:37.437077 containerd[1714]: time="2026-01-24T00:45:37.436687456Z" level=info msg="TearDown network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" successfully" Jan 24 00:45:37.437077 containerd[1714]: time="2026-01-24T00:45:37.436726157Z" level=info msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" returns successfully" Jan 24 00:45:37.441625 containerd[1714]: time="2026-01-24T00:45:37.441595565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msr6b,Uid:6289d75a-fb3d-4a7e-b426-fb74d3f97fd2,Namespace:calico-system,Attempt:1,}" Jan 24 00:45:37.478501 systemd-networkd[1356]: vxlan.calico: Gained IPv6LL Jan 24 00:45:37.521837 kubelet[3160]: E0124 00:45:37.521555 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:37.555789 containerd[1714]: time="2026-01-24T00:45:37.555581003Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:37.560095 containerd[1714]: time="2026-01-24T00:45:37.559948401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:37.560095 containerd[1714]: time="2026-01-24T00:45:37.560036103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:37.561950 kubelet[3160]: E0124 00:45:37.560590 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:37.561950 kubelet[3160]: E0124 00:45:37.560629 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:37.561950 kubelet[3160]: E0124 00:45:37.560702 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:37.561950 kubelet[3160]: E0124 
00:45:37.560737 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:45:37.585421 kubelet[3160]: I0124 00:45:37.585185 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-98wpg" podStartSLOduration=43.585165362 podStartE2EDuration="43.585165362s" podCreationTimestamp="2026-01-24 00:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:37.58463195 +0000 UTC m=+49.885750246" watchObservedRunningTime="2026-01-24 00:45:37.585165362 +0000 UTC m=+49.886283558" Jan 24 00:45:37.694735 systemd-networkd[1356]: cali3a119624823: Link UP Jan 24 00:45:37.694982 systemd-networkd[1356]: cali3a119624823: Gained carrier Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.565 [INFO][5067] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0 csi-node-driver- calico-system 6289d75a-fb3d-4a7e-b426-fb74d3f97fd2 955 0 2026-01-24 00:45:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 csi-node-driver-msr6b eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3a119624823 [] [] }} ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.566 [INFO][5067] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.623 [INFO][5089] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" HandleID="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.623 [INFO][5089] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" HandleID="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5840), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"csi-node-driver-msr6b", "timestamp":"2026-01-24 00:45:37.623153708 
+0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.624 [INFO][5089] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.624 [INFO][5089] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.624 [INFO][5089] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.636 [INFO][5089] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.645 [INFO][5089] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.651 [INFO][5089] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.653 [INFO][5089] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.656 [INFO][5089] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.656 [INFO][5089] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.663 [INFO][5089] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.672 [INFO][5089] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.684 [INFO][5089] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.69/26] block=192.168.76.64/26 handle="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.685 [INFO][5089] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.69/26] handle="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.685 [INFO][5089] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:37.736246 containerd[1714]: 2026-01-24 00:45:37.685 [INFO][5089] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.69/26] IPv6=[] ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" HandleID="k8s-pod-network.f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.688 [INFO][5067] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"csi-node-driver-msr6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a119624823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.688 [INFO][5067] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.69/32] ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.688 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a119624823 ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.691 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.691 [INFO][5067] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2", ResourceVersion:"955", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede", Pod:"csi-node-driver-msr6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a119624823", MAC:"be:6e:04:bd:0b:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.738822 containerd[1714]: 2026-01-24 00:45:37.730 [INFO][5067] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede" Namespace="calico-system" Pod="csi-node-driver-msr6b" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:37.783153 containerd[1714]: time="2026-01-24T00:45:37.782717861Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:37.783153 containerd[1714]: time="2026-01-24T00:45:37.782779662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:37.785961 containerd[1714]: time="2026-01-24T00:45:37.784463300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.785961 containerd[1714]: time="2026-01-24T00:45:37.784590003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.802177 systemd-networkd[1356]: cali110fbd756d7: Link UP Jan 24 00:45:37.802531 systemd-networkd[1356]: cali110fbd756d7: Gained carrier Jan 24 00:45:37.834907 systemd[1]: Started cri-containerd-f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede.scope - libcontainer container f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede. 
Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.558 [INFO][5071] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0 coredns-66bc5c9577- kube-system f6f98da9-ca79-4902-82ee-3f5271b4428b 956 0 2026-01-24 00:44:54 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 coredns-66bc5c9577-4lbk4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali110fbd756d7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.559 [INFO][5071] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.630 [INFO][5094] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" HandleID="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.631 [INFO][5094] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" HandleID="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f1b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"coredns-66bc5c9577-4lbk4", "timestamp":"2026-01-24 00:45:37.630832979 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.631 [INFO][5094] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.685 [INFO][5094] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.685 [INFO][5094] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.736 [INFO][5094] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.748 [INFO][5094] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.757 [INFO][5094] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.759 [INFO][5094] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.763 [INFO][5094] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.763 [INFO][5094] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.766 [INFO][5094] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.774 [INFO][5094] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.788 [INFO][5094] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.70/26] block=192.168.76.64/26 handle="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.788 [INFO][5094] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.70/26] handle="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.788 [INFO][5094] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:37.842367 containerd[1714]: 2026-01-24 00:45:37.788 [INFO][5094] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.70/26] IPv6=[] ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" HandleID="k8s-pod-network.1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.843279 containerd[1714]: 2026-01-24 00:45:37.793 [INFO][5071] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6f98da9-ca79-4902-82ee-3f5271b4428b", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"coredns-66bc5c9577-4lbk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali110fbd756d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.843279 containerd[1714]: 2026-01-24 00:45:37.793 [INFO][5071] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.70/32] ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.843279 containerd[1714]: 2026-01-24 00:45:37.793 [INFO][5071] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali110fbd756d7 ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" 
WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.843279 containerd[1714]: 2026-01-24 00:45:37.806 [INFO][5071] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.843279 containerd[1714]: 2026-01-24 00:45:37.807 [INFO][5071] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6f98da9-ca79-4902-82ee-3f5271b4428b", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c", Pod:"coredns-66bc5c9577-4lbk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali110fbd756d7", MAC:"f2:d0:ec:11:a3:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:37.843704 containerd[1714]: 2026-01-24 00:45:37.831 [INFO][5071] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c" Namespace="kube-system" Pod="coredns-66bc5c9577-4lbk4" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:37.859101 systemd[1]: run-netns-cni\x2dfb077324\x2d34d3\x2d3c9b\x2d2c34\x2d4b92cd0c860c.mount: Deactivated successfully. 
Jan 24 00:45:37.859243 systemd[1]: run-netns-cni\x2dcad4d0c3\x2de360\x2d84eb\x2d0647\x2d8cd14c5c7bc1.mount: Deactivated successfully. Jan 24 00:45:37.898388 containerd[1714]: time="2026-01-24T00:45:37.897781323Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:37.900247 containerd[1714]: time="2026-01-24T00:45:37.899428660Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:37.905879 containerd[1714]: time="2026-01-24T00:45:37.905068286Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.905879 containerd[1714]: time="2026-01-24T00:45:37.905171788Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:37.945076 containerd[1714]: time="2026-01-24T00:45:37.944563765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-msr6b,Uid:6289d75a-fb3d-4a7e-b426-fb74d3f97fd2,Namespace:calico-system,Attempt:1,} returns sandbox id \"f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede\"" Jan 24 00:45:37.950483 systemd[1]: Started cri-containerd-1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c.scope - libcontainer container 1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c. Jan 24 00:45:37.958629 containerd[1714]: time="2026-01-24T00:45:37.958586577Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:45:38.010044 containerd[1714]: time="2026-01-24T00:45:38.010004622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-4lbk4,Uid:f6f98da9-ca79-4902-82ee-3f5271b4428b,Namespace:kube-system,Attempt:1,} returns sandbox id \"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c\"" Jan 24 00:45:38.019090 containerd[1714]: time="2026-01-24T00:45:38.018998922Z" level=info msg="CreateContainer within sandbox \"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 24 00:45:38.051187 containerd[1714]: time="2026-01-24T00:45:38.050934034Z" level=info msg="CreateContainer within sandbox \"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"94802d0bab6f23d570e9f99c58b765d55bc39f2e9406630683526db77cc07667\"" Jan 24 00:45:38.053323 containerd[1714]: time="2026-01-24T00:45:38.053294186Z" level=info msg="StartContainer for \"94802d0bab6f23d570e9f99c58b765d55bc39f2e9406630683526db77cc07667\"" Jan 24 00:45:38.056633 systemd-networkd[1356]: cali275601b9b8f: Gained IPv6LL Jan 24 00:45:38.078530 systemd[1]: Started cri-containerd-94802d0bab6f23d570e9f99c58b765d55bc39f2e9406630683526db77cc07667.scope - libcontainer container 94802d0bab6f23d570e9f99c58b765d55bc39f2e9406630683526db77cc07667. 
Jan 24 00:45:38.110544 containerd[1714]: time="2026-01-24T00:45:38.110495160Z" level=info msg="StartContainer for \"94802d0bab6f23d570e9f99c58b765d55bc39f2e9406630683526db77cc07667\" returns successfully" Jan 24 00:45:38.225246 containerd[1714]: time="2026-01-24T00:45:38.225197514Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:38.227921 containerd[1714]: time="2026-01-24T00:45:38.227871874Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:45:38.228207 containerd[1714]: time="2026-01-24T00:45:38.227891574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:45:38.228261 kubelet[3160]: E0124 00:45:38.228157 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:38.228261 kubelet[3160]: E0124 00:45:38.228211 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:38.228387 kubelet[3160]: E0124 00:45:38.228304 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:38.229991 containerd[1714]: time="2026-01-24T00:45:38.229670614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:45:38.438519 systemd-networkd[1356]: cali4fe16ff7b82: Gained IPv6LL Jan 24 00:45:38.495486 containerd[1714]: time="2026-01-24T00:45:38.495421731Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:38.500487 containerd[1714]: time="2026-01-24T00:45:38.500203038Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:45:38.500487 containerd[1714]: time="2026-01-24T00:45:38.500302040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:45:38.500649 kubelet[3160]: E0124 00:45:38.500483 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:38.500649 kubelet[3160]: E0124 00:45:38.500539 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:38.501593 kubelet[3160]: E0124 00:45:38.500635 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:38.501593 kubelet[3160]: E0124 00:45:38.500716 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:38.543810 kubelet[3160]: E0124 00:45:38.543694 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:38.548258 kubelet[3160]: E0124 00:45:38.548140 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:45:38.548258 kubelet[3160]: E0124 00:45:38.548213 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:38.572639 kubelet[3160]: I0124 00:45:38.572116 3160 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-4lbk4" podStartSLOduration=44.572097439 podStartE2EDuration="44.572097439s" podCreationTimestamp="2026-01-24 00:44:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-24 00:45:38.571952935 +0000 UTC m=+50.873071231" watchObservedRunningTime="2026-01-24 00:45:38.572097439 +0000 UTC m=+50.873215635" Jan 24 00:45:38.758649 systemd-networkd[1356]: calic48e80977ca: Gained IPv6LL Jan 24 00:45:39.078526 systemd-networkd[1356]: cali3a119624823: Gained IPv6LL Jan 24 00:45:39.296814 containerd[1714]: time="2026-01-24T00:45:39.296722797Z" level=info msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.355 [INFO][5255] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.356 [INFO][5255] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" iface="eth0" netns="/var/run/netns/cni-8cad3c9a-7ef3-1db0-c289-53e5efbdec69" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.356 [INFO][5255] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" iface="eth0" netns="/var/run/netns/cni-8cad3c9a-7ef3-1db0-c289-53e5efbdec69" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.356 [INFO][5255] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" iface="eth0" netns="/var/run/netns/cni-8cad3c9a-7ef3-1db0-c289-53e5efbdec69" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.357 [INFO][5255] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.357 [INFO][5255] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.377 [INFO][5263] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.377 [INFO][5263] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.378 [INFO][5263] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.385 [WARNING][5263] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.385 [INFO][5263] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.386 [INFO][5263] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:39.389051 containerd[1714]: 2026-01-24 00:45:39.387 [INFO][5255] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:39.389848 containerd[1714]: time="2026-01-24T00:45:39.389109976Z" level=info msg="TearDown network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" successfully" Jan 24 00:45:39.389848 containerd[1714]: time="2026-01-24T00:45:39.389142377Z" level=info msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" returns successfully" Jan 24 00:45:39.393817 systemd[1]: run-netns-cni\x2d8cad3c9a\x2d7ef3\x2d1db0\x2dc289\x2d53e5efbdec69.mount: Deactivated successfully. 
Jan 24 00:45:39.399244 systemd-networkd[1356]: cali110fbd756d7: Gained IPv6LL Jan 24 00:45:39.399824 containerd[1714]: time="2026-01-24T00:45:39.399782817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5598cf5ccb-2mj7w,Uid:ea8e1ae1-59b4-45f9-9265-2981e79d3abb,Namespace:calico-system,Attempt:1,}" Jan 24 00:45:39.540697 systemd-networkd[1356]: cali274192c5ea8: Link UP Jan 24 00:45:39.540913 systemd-networkd[1356]: cali274192c5ea8: Gained carrier Jan 24 00:45:39.556994 kubelet[3160]: E0124 00:45:39.556855 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.466 [INFO][5270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0 calico-kube-controllers-5598cf5ccb- calico-system ea8e1ae1-59b4-45f9-9265-2981e79d3abb 1012 0 2026-01-24 00:45:10 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5598cf5ccb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 calico-kube-controllers-5598cf5ccb-2mj7w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali274192c5ea8 [] [] }} ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.466 [INFO][5270] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.490 [INFO][5281] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" HandleID="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.491 [INFO][5281] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" HandleID="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"calico-kube-controllers-5598cf5ccb-2mj7w", "timestamp":"2026-01-24 00:45:39.490687162 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.491 [INFO][5281] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.491 [INFO][5281] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.491 [INFO][5281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.497 [INFO][5281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.504 [INFO][5281] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.509 [INFO][5281] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.510 [INFO][5281] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.513 [INFO][5281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.513 [INFO][5281] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.514 [INFO][5281] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25 Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.518 [INFO][5281] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.531 [INFO][5281] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.71/26] block=192.168.76.64/26 handle="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.531 [INFO][5281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.71/26] handle="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:39.581153 
containerd[1714]: 2026-01-24 00:45:39.531 [INFO][5281] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:39.581153 containerd[1714]: 2026-01-24 00:45:39.531 [INFO][5281] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.71/26] IPv6=[] ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" HandleID="k8s-pod-network.ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.534 [INFO][5270] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0", GenerateName:"calico-kube-controllers-5598cf5ccb-", Namespace:"calico-system", SelfLink:"", UID:"ea8e1ae1-59b4-45f9-9265-2981e79d3abb", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5598cf5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"calico-kube-controllers-5598cf5ccb-2mj7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali274192c5ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.534 [INFO][5270] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.71/32] ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.534 [INFO][5270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali274192c5ea8 ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.537 [INFO][5270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" 
Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.544 [INFO][5270] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0", GenerateName:"calico-kube-controllers-5598cf5ccb-", Namespace:"calico-system", SelfLink:"", UID:"ea8e1ae1-59b4-45f9-9265-2981e79d3abb", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5598cf5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25", Pod:"calico-kube-controllers-5598cf5ccb-2mj7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali274192c5ea8", MAC:"72:9b:f3:8f:62:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:39.582048 containerd[1714]: 2026-01-24 00:45:39.577 [INFO][5270] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25" Namespace="calico-system" Pod="calico-kube-controllers-5598cf5ccb-2mj7w" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:39.626924 containerd[1714]: time="2026-01-24T00:45:39.626815626Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:39.627089 containerd[1714]: time="2026-01-24T00:45:39.626938129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:39.627089 containerd[1714]: time="2026-01-24T00:45:39.626972630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:39.627488 containerd[1714]: time="2026-01-24T00:45:39.627090532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:39.669850 systemd[1]: Started cri-containerd-ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25.scope - libcontainer container ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25. Jan 24 00:45:39.766715 containerd[1714]: time="2026-01-24T00:45:39.766671973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5598cf5ccb-2mj7w,Uid:ea8e1ae1-59b4-45f9-9265-2981e79d3abb,Namespace:calico-system,Attempt:1,} returns sandbox id \"ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25\"" Jan 24 00:45:39.769196 containerd[1714]: time="2026-01-24T00:45:39.768864123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:45:40.044423 containerd[1714]: time="2026-01-24T00:45:40.044248820Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:40.047569 containerd[1714]: time="2026-01-24T00:45:40.047505694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:45:40.047702 containerd[1714]: time="2026-01-24T00:45:40.047611396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:40.047905 kubelet[3160]: E0124 00:45:40.047833 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:40.047905 kubelet[3160]: E0124 00:45:40.047890 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:40.048085 kubelet[3160]: E0124 00:45:40.048006 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:40.048085 kubelet[3160]: E0124 00:45:40.048057 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" 
podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:40.557356 kubelet[3160]: E0124 00:45:40.557285 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:40.742613 systemd-networkd[1356]: cali274192c5ea8: Gained IPv6LL Jan 24 00:45:41.297485 containerd[1714]: time="2026-01-24T00:45:41.296168695Z" level=info msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.347 [INFO][5350] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.347 [INFO][5350] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" iface="eth0" netns="/var/run/netns/cni-9fa2f868-72c5-b8c5-cf26-dbcf25cebafa" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.348 [INFO][5350] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" iface="eth0" netns="/var/run/netns/cni-9fa2f868-72c5-b8c5-cf26-dbcf25cebafa" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.348 [INFO][5350] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" iface="eth0" netns="/var/run/netns/cni-9fa2f868-72c5-b8c5-cf26-dbcf25cebafa" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.348 [INFO][5350] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.348 [INFO][5350] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.383 [INFO][5359] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.383 [INFO][5359] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.384 [INFO][5359] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.393 [WARNING][5359] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.393 [INFO][5359] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.394 [INFO][5359] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:41.399358 containerd[1714]: 2026-01-24 00:45:41.396 [INFO][5350] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:41.400177 containerd[1714]: time="2026-01-24T00:45:41.400013632Z" level=info msg="TearDown network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" successfully" Jan 24 00:45:41.400177 containerd[1714]: time="2026-01-24T00:45:41.400058033Z" level=info msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" returns successfully" Jan 24 00:45:41.404083 systemd[1]: run-netns-cni\x2d9fa2f868\x2d72c5\x2db8c5\x2dcf26\x2ddbcf25cebafa.mount: Deactivated successfully. Jan 24 00:45:41.407827 containerd[1714]: time="2026-01-24T00:45:41.407421499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-nk8rp,Uid:7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9,Namespace:calico-apiserver,Attempt:1,}" Jan 24 00:45:41.562444 kubelet[3160]: E0124 00:45:41.562144 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:41.572519 systemd-networkd[1356]: cali435cbc91fcd: Link UP Jan 24 00:45:41.572760 systemd-networkd[1356]: cali435cbc91fcd: Gained carrier Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.482 [INFO][5366] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0 calico-apiserver-64999767c9- calico-apiserver 7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9 1032 0 2026-01-24 00:45:05 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64999767c9 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-e69c55f9b7 calico-apiserver-64999767c9-nk8rp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali435cbc91fcd [] [] }} ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" 
Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.483 [INFO][5366] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.511 [INFO][5377] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" HandleID="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.511 [INFO][5377] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" HandleID="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cefe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-e69c55f9b7", "pod":"calico-apiserver-64999767c9-nk8rp", "timestamp":"2026-01-24 00:45:41.511203034 +0000 UTC"}, Hostname:"ci-4081.3.6-n-e69c55f9b7", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.511 [INFO][5377] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.511 [INFO][5377] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.511 [INFO][5377] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-e69c55f9b7' Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.517 [INFO][5377] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.522 [INFO][5377] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.525 [INFO][5377] ipam/ipam.go 511: Trying affinity for 192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.527 [INFO][5377] ipam/ipam.go 158: Attempting to load block cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.529 [INFO][5377] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.76.64/26 host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.529 [INFO][5377] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.76.64/26 handle="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.530 [INFO][5377] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81 Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.539 [INFO][5377] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.76.64/26 handle="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.561 [INFO][5377] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.76.72/26] block=192.168.76.64/26 handle="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.562 [INFO][5377] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.76.72/26] handle="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" host="ci-4081.3.6-n-e69c55f9b7" Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.562 [INFO][5377] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 24 00:45:41.601384 containerd[1714]: 2026-01-24 00:45:41.562 [INFO][5377] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.76.72/26] IPv6=[] ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" HandleID="k8s-pod-network.38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.564 [INFO][5366] cni-plugin/k8s.go 418: Populated endpoint ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"", Pod:"calico-apiserver-64999767c9-nk8rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cbc91fcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.564 [INFO][5366] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.76.72/32] ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.564 [INFO][5366] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali435cbc91fcd ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.568 [INFO][5366] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.569 
[INFO][5366] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81", Pod:"calico-apiserver-64999767c9-nk8rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cbc91fcd", MAC:"6a:bd:1d:a0:38:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:41.603387 containerd[1714]: 2026-01-24 00:45:41.596 [INFO][5366] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81" Namespace="calico-apiserver" Pod="calico-apiserver-64999767c9-nk8rp" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:41.624531 containerd[1714]: time="2026-01-24T00:45:41.624371681Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 24 00:45:41.624531 containerd[1714]: time="2026-01-24T00:45:41.624430382Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 24 00:45:41.624531 containerd[1714]: time="2026-01-24T00:45:41.624449383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:41.625596 containerd[1714]: time="2026-01-24T00:45:41.625461606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 24 00:45:41.660517 systemd[1]: Started cri-containerd-38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81.scope - libcontainer container 38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81. 
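The IPAM traces above show Calico's block-affinity allocation: the node holds an affine /26 (192.168.76.64/26), takes the host-wide IPAM lock, picks the next free address in the block (.72 here, after .64-.71), and writes the block back to the datastore to claim it. Below is a stdlib-only Go sketch of the address-picking arithmetic only; it is not Calico's code, which also acquires block affinity and persists the claim under that lock.

```go
// Stdlib-only sketch of the "Attempting to assign 1 addresses from
// block" step in the IPAM trace above.
package main

import (
	"fmt"
	"net/netip"
)

// nextFree returns the first unassigned address in the node's affine
// block.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			return a, true
		}
	}
	return netip.Addr{}, false // block exhausted: a /26 holds 64 addresses
}

func main() {
	block := netip.MustParsePrefix("192.168.76.64/26") // this node's block
	used := map[netip.Addr]bool{}
	// .64-.71 were claimed by earlier endpoints in this log.
	end := netip.MustParseAddr("192.168.76.72")
	for a := netip.MustParseAddr("192.168.76.64"); a.Compare(end) < 0; a = a.Next() {
		used[a] = true
	}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("assigned", ip) // 192.168.76.72, as claimed above
	}
}
```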
Jan 24 00:45:41.705722 containerd[1714]: time="2026-01-24T00:45:41.705678411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64999767c9-nk8rp,Uid:7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81\"" Jan 24 00:45:41.712293 containerd[1714]: time="2026-01-24T00:45:41.712247359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:41.977131 containerd[1714]: time="2026-01-24T00:45:41.976916715Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:41.980932 containerd[1714]: time="2026-01-24T00:45:41.980808603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:41.981164 containerd[1714]: time="2026-01-24T00:45:41.980864404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:41.981919 kubelet[3160]: E0124 00:45:41.981488 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:41.981919 kubelet[3160]: E0124 00:45:41.981534 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:41.981919 kubelet[3160]: E0124 00:45:41.981604 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:41.981919 kubelet[3160]: E0124 00:45:41.981635 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:45:42.570168 kubelet[3160]: E0124 00:45:42.569883 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to 
resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:45:43.558731 systemd-networkd[1356]: cali435cbc91fcd: Gained IPv6LL Jan 24 00:45:43.572866 kubelet[3160]: E0124 00:45:43.572412 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:45:48.281710 containerd[1714]: time="2026-01-24T00:45:48.281660055Z" level=info msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.317 [WARNING][5454] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"60f29bc1-01eb-4e81-a219-3085d4f87052", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b", Pod:"goldmane-7c778bb748-wr4rk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fe16ff7b82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.317 [INFO][5454] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.317 [INFO][5454] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" iface="eth0" netns="" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.317 [INFO][5454] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.317 [INFO][5454] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.338 [INFO][5463] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.339 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.339 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.345 [WARNING][5463] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.345 [INFO][5463] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.346 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.348792 containerd[1714]: 2026-01-24 00:45:48.347 [INFO][5454] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.349950 containerd[1714]: time="2026-01-24T00:45:48.348860477Z" level=info msg="TearDown network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" successfully" Jan 24 00:45:48.349950 containerd[1714]: time="2026-01-24T00:45:48.348914378Z" level=info msg="StopPodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" returns successfully" Jan 24 00:45:48.349950 containerd[1714]: time="2026-01-24T00:45:48.349536292Z" level=info msg="RemovePodSandbox for \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" Jan 24 00:45:48.349950 containerd[1714]: time="2026-01-24T00:45:48.349569093Z" level=info msg="Forcibly stopping sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\"" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.382 [WARNING][5477] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"60f29bc1-01eb-4e81-a219-3085d4f87052", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"7eeb29846a207781edcb4692c58b5623afa0c95ade39e6adacfdd25f8717d82b", Pod:"goldmane-7c778bb748-wr4rk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.76.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4fe16ff7b82", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.383 [INFO][5477] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.383 [INFO][5477] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" iface="eth0" netns="" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.383 [INFO][5477] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.383 [INFO][5477] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.407 [INFO][5484] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.407 [INFO][5484] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.407 [INFO][5484] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.412 [WARNING][5484] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.412 [INFO][5484] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" HandleID="k8s-pod-network.6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-goldmane--7c778bb748--wr4rk-eth0" Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.414 [INFO][5484] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.416684 containerd[1714]: 2026-01-24 00:45:48.415 [INFO][5477] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd" Jan 24 00:45:48.417357 containerd[1714]: time="2026-01-24T00:45:48.416726414Z" level=info msg="TearDown network for sandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" successfully" Jan 24 00:45:48.423379 containerd[1714]: time="2026-01-24T00:45:48.423334864Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:48.423497 containerd[1714]: time="2026-01-24T00:45:48.423414666Z" level=info msg="RemovePodSandbox \"6734ca72996dc4c556c1652c1ac2128534964ff5d6724e80da62890609df24cd\" returns successfully" Jan 24 00:45:48.423976 containerd[1714]: time="2026-01-24T00:45:48.423944178Z" level=info msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.457 [WARNING][5498] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81", Pod:"calico-apiserver-64999767c9-nk8rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cbc91fcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.458 [INFO][5498] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.458 [INFO][5498] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" iface="eth0" netns="" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.458 [INFO][5498] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.458 [INFO][5498] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.479 [INFO][5505] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.479 [INFO][5505] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.479 [INFO][5505] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.486 [WARNING][5505] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.486 [INFO][5505] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.487 [INFO][5505] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.490228 containerd[1714]: 2026-01-24 00:45:48.488 [INFO][5498] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.491121 containerd[1714]: time="2026-01-24T00:45:48.490289680Z" level=info msg="TearDown network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" successfully" Jan 24 00:45:48.491121 containerd[1714]: time="2026-01-24T00:45:48.490321681Z" level=info msg="StopPodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" returns successfully" Jan 24 00:45:48.491121 containerd[1714]: time="2026-01-24T00:45:48.490989696Z" level=info msg="RemovePodSandbox for \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" Jan 24 00:45:48.491121 containerd[1714]: time="2026-01-24T00:45:48.491024897Z" level=info msg="Forcibly stopping sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\"" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.524 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9", ResourceVersion:"1054", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"38ef85a1e80f00ae42e95f33737462fa5153e66b7cc301ae89ee28427d764e81", Pod:"calico-apiserver-64999767c9-nk8rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali435cbc91fcd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.525 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.525 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" iface="eth0" netns="" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.525 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.525 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.556 [INFO][5527] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.557 [INFO][5527] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.557 [INFO][5527] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.567 [WARNING][5527] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.567 [INFO][5527] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" HandleID="k8s-pod-network.a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--nk8rp-eth0" Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.568 [INFO][5527] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.571404 containerd[1714]: 2026-01-24 00:45:48.569 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d" Jan 24 00:45:48.571404 containerd[1714]: time="2026-01-24T00:45:48.570994708Z" level=info msg="TearDown network for sandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" successfully" Jan 24 00:45:48.579146 containerd[1714]: time="2026-01-24T00:45:48.579100091Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:48.579310 containerd[1714]: time="2026-01-24T00:45:48.579171293Z" level=info msg="RemovePodSandbox \"a7bf2babf1d1d55e435578a789346be6c0a90c70c2c87a41cda52a664a13610d\" returns successfully" Jan 24 00:45:48.579676 containerd[1714]: time="2026-01-24T00:45:48.579644504Z" level=info msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.615 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3b0e4f7-4203-4a9e-8024-a24d4365a71d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e", Pod:"coredns-66bc5c9577-98wpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48e80977ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.616 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.616 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" iface="eth0" netns="" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.616 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.616 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.638 [INFO][5549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.638 [INFO][5549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.638 [INFO][5549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.644 [WARNING][5549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.644 [INFO][5549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.645 [INFO][5549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.648229 containerd[1714]: 2026-01-24 00:45:48.646 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.648914 containerd[1714]: time="2026-01-24T00:45:48.648339559Z" level=info msg="TearDown network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" successfully" Jan 24 00:45:48.648914 containerd[1714]: time="2026-01-24T00:45:48.648397061Z" level=info msg="StopPodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" returns successfully" Jan 24 00:45:48.649485 containerd[1714]: time="2026-01-24T00:45:48.649451385Z" level=info msg="RemovePodSandbox for \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" Jan 24 00:45:48.649593 containerd[1714]: time="2026-01-24T00:45:48.649490085Z" level=info msg="Forcibly stopping sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\"" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.683 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f3b0e4f7-4203-4a9e-8024-a24d4365a71d", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"da979a133679a284c6e4d65405b18486464069094b7347f5589d101f407f875e", Pod:"coredns-66bc5c9577-98wpg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic48e80977ca", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.683 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.683 [INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" iface="eth0" netns="" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.683 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.683 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.704 [INFO][5570] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.704 [INFO][5570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.704 [INFO][5570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.710 [WARNING][5570] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.710 [INFO][5570] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" HandleID="k8s-pod-network.057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--98wpg-eth0" Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.711 [INFO][5570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.715476 containerd[1714]: 2026-01-24 00:45:48.712 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710" Jan 24 00:45:48.715476 containerd[1714]: time="2026-01-24T00:45:48.714133649Z" level=info msg="TearDown network for sandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" successfully" Jan 24 00:45:48.720475 containerd[1714]: time="2026-01-24T00:45:48.720439592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:48.720585 containerd[1714]: time="2026-01-24T00:45:48.720519094Z" level=info msg="RemovePodSandbox \"057929440d8d8d2371c66f801eafaa1c102288fa94556c8e408207d2c8478710\" returns successfully" Jan 24 00:45:48.721087 containerd[1714]: time="2026-01-24T00:45:48.721059006Z" level=info msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.752 [WARNING][5584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede", Pod:"csi-node-driver-msr6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a119624823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.752 [INFO][5584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.752 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" iface="eth0" netns="" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.752 [INFO][5584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.752 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.772 [INFO][5591] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.772 [INFO][5591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.772 [INFO][5591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.778 [WARNING][5591] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.778 [INFO][5591] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.780 [INFO][5591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.782771 containerd[1714]: 2026-01-24 00:45:48.781 [INFO][5584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.783736 containerd[1714]: time="2026-01-24T00:45:48.782797704Z" level=info msg="TearDown network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" successfully" Jan 24 00:45:48.783736 containerd[1714]: time="2026-01-24T00:45:48.782828605Z" level=info msg="StopPodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" returns successfully" Jan 24 00:45:48.783736 containerd[1714]: time="2026-01-24T00:45:48.783444419Z" level=info msg="RemovePodSandbox for \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" Jan 24 00:45:48.783736 containerd[1714]: time="2026-01-24T00:45:48.783479820Z" level=info msg="Forcibly stopping sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\"" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.813 [WARNING][5605] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6289d75a-fb3d-4a7e-b426-fb74d3f97fd2", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"f70a23ca18f6d8366070a169cf46ee5ea7e44344f2b35a4b45d2d54180beeede", Pod:"csi-node-driver-msr6b", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.76.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3a119624823", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.814 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.814 [INFO][5605] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" iface="eth0" netns="" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.814 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.814 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.836 [INFO][5612] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.836 [INFO][5612] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.836 [INFO][5612] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.842 [WARNING][5612] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.842 [INFO][5612] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" HandleID="k8s-pod-network.e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-csi--node--driver--msr6b-eth0" Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.843 [INFO][5612] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.845881 containerd[1714]: 2026-01-24 00:45:48.844 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298" Jan 24 00:45:48.846525 containerd[1714]: time="2026-01-24T00:45:48.845868433Z" level=info msg="TearDown network for sandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" successfully" Jan 24 00:45:48.856079 containerd[1714]: time="2026-01-24T00:45:48.856044663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:48.856182 containerd[1714]: time="2026-01-24T00:45:48.856111265Z" level=info msg="RemovePodSandbox \"e660e55da3a97dbbb9f65f91a8a5cafad52603d1c92b63ee8d3e67c9ab0f6298\" returns successfully" Jan 24 00:45:48.856720 containerd[1714]: time="2026-01-24T00:45:48.856672877Z" level=info msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.888 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6f98da9-ca79-4902-82ee-3f5271b4428b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c", Pod:"coredns-66bc5c9577-4lbk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali110fbd756d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.888 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.888 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" iface="eth0" netns="" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.888 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.888 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.907 [INFO][5634] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.907 [INFO][5634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.907 [INFO][5634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.915 [WARNING][5634] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.915 [INFO][5634] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.916 [INFO][5634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.918966 containerd[1714]: 2026-01-24 00:45:48.917 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.919900 containerd[1714]: time="2026-01-24T00:45:48.918984689Z" level=info msg="TearDown network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" successfully" Jan 24 00:45:48.919900 containerd[1714]: time="2026-01-24T00:45:48.919020089Z" level=info msg="StopPodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" returns successfully" Jan 24 00:45:48.919900 containerd[1714]: time="2026-01-24T00:45:48.919622803Z" level=info msg="RemovePodSandbox for \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" Jan 24 00:45:48.919900 containerd[1714]: time="2026-01-24T00:45:48.919659804Z" level=info msg="Forcibly stopping sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\"" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.951 [WARNING][5648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"f6f98da9-ca79-4902-82ee-3f5271b4428b", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 44, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"1e316b4d0316f9025ec60e52062863efd414999dd7473ed13dc93e57f84c2e5c", Pod:"coredns-66bc5c9577-4lbk4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.76.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali110fbd756d7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.951 [INFO][5648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.951 [INFO][5648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" iface="eth0" netns="" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.951 [INFO][5648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.952 [INFO][5648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.974 [INFO][5655] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.974 [INFO][5655] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.974 [INFO][5655] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.980 [WARNING][5655] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.980 [INFO][5655] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" HandleID="k8s-pod-network.f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-coredns--66bc5c9577--4lbk4-eth0" Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.981 [INFO][5655] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:48.986351 containerd[1714]: 2026-01-24 00:45:48.982 [INFO][5648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f" Jan 24 00:45:48.986351 containerd[1714]: time="2026-01-24T00:45:48.985564796Z" level=info msg="TearDown network for sandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" successfully" Jan 24 00:45:48.995066 containerd[1714]: time="2026-01-24T00:45:48.995027011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:48.995186 containerd[1714]: time="2026-01-24T00:45:48.995090212Z" level=info msg="RemovePodSandbox \"f54de3569cbfbe6f24a0ba4006db8dd70c9453fed9430a624673da4aea09bd6f\" returns successfully" Jan 24 00:45:48.995695 containerd[1714]: time="2026-01-24T00:45:48.995598724Z" level=info msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.033 [WARNING][5669] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b4d50cd-bfa9-4817-b2aa-6df460bb529b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4", Pod:"calico-apiserver-64999767c9-w9j7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275601b9b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.033 [INFO][5669] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.033 [INFO][5669] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" iface="eth0" netns="" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.033 [INFO][5669] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.033 [INFO][5669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.054 [INFO][5676] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.055 [INFO][5676] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.055 [INFO][5676] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.060 [WARNING][5676] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.060 [INFO][5676] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.061 [INFO][5676] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.064513 containerd[1714]: 2026-01-24 00:45:49.063 [INFO][5669] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.065900 containerd[1714]: time="2026-01-24T00:45:49.064557185Z" level=info msg="TearDown network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" successfully" Jan 24 00:45:49.065900 containerd[1714]: time="2026-01-24T00:45:49.064587886Z" level=info msg="StopPodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" returns successfully" Jan 24 00:45:49.065900 containerd[1714]: time="2026-01-24T00:45:49.065097898Z" level=info msg="RemovePodSandbox for \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" Jan 24 00:45:49.065900 containerd[1714]: time="2026-01-24T00:45:49.065130298Z" level=info msg="Forcibly stopping sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\"" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.106 [WARNING][5690] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0", GenerateName:"calico-apiserver-64999767c9-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b4d50cd-bfa9-4817-b2aa-6df460bb529b", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64999767c9", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"2df4769798c1bab5e352f6a5333ee1ed777ce0a1f76d83ae3e1d6a9dee1b61b4", Pod:"calico-apiserver-64999767c9-w9j7d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.76.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali275601b9b8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.107 [INFO][5690] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.107 [INFO][5690] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" iface="eth0" netns="" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.107 [INFO][5690] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.107 [INFO][5690] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.126 [INFO][5697] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.127 [INFO][5697] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.127 [INFO][5697] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.133 [WARNING][5697] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.133 [INFO][5697] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" HandleID="k8s-pod-network.eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--apiserver--64999767c9--w9j7d-eth0" Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.134 [INFO][5697] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.137068 containerd[1714]: 2026-01-24 00:45:49.135 [INFO][5690] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef" Jan 24 00:45:49.137068 containerd[1714]: time="2026-01-24T00:45:49.137046727Z" level=info msg="TearDown network for sandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" successfully" Jan 24 00:45:49.145472 containerd[1714]: time="2026-01-24T00:45:49.145289214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:49.145472 containerd[1714]: time="2026-01-24T00:45:49.145370016Z" level=info msg="RemovePodSandbox \"eccc7a1b9cba24a96e92abcce8c25c4a0a36b1650dc9f383b00e75696139dcef\" returns successfully" Jan 24 00:45:49.145945 containerd[1714]: time="2026-01-24T00:45:49.145919128Z" level=info msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.182 [WARNING][5711] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.182 [INFO][5711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.182 [INFO][5711] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" iface="eth0" netns="" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.182 [INFO][5711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.182 [INFO][5711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.201 [INFO][5718] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.202 [INFO][5718] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.202 [INFO][5718] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.210 [WARNING][5718] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.210 [INFO][5718] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.211 [INFO][5718] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.213958 containerd[1714]: 2026-01-24 00:45:49.212 [INFO][5711] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.214693 containerd[1714]: time="2026-01-24T00:45:49.214073671Z" level=info msg="TearDown network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" successfully" Jan 24 00:45:49.214693 containerd[1714]: time="2026-01-24T00:45:49.214103672Z" level=info msg="StopPodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" returns successfully" Jan 24 00:45:49.214776 containerd[1714]: time="2026-01-24T00:45:49.214730886Z" level=info msg="RemovePodSandbox for \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" Jan 24 00:45:49.214776 containerd[1714]: time="2026-01-24T00:45:49.214762087Z" level=info msg="Forcibly stopping sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\"" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.245 [WARNING][5732] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" WorkloadEndpoint="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.245 [INFO][5732] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.245 [INFO][5732] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" iface="eth0" netns="" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.245 [INFO][5732] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.245 [INFO][5732] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.267 [INFO][5739] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.267 [INFO][5739] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.267 [INFO][5739] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.272 [WARNING][5739] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.273 [INFO][5739] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" HandleID="k8s-pod-network.47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-whisker--5784f66b48--622qn-eth0" Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.274 [INFO][5739] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.276367 containerd[1714]: 2026-01-24 00:45:49.275 [INFO][5732] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122" Jan 24 00:45:49.276367 containerd[1714]: time="2026-01-24T00:45:49.276295181Z" level=info msg="TearDown network for sandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" successfully" Jan 24 00:45:49.285576 containerd[1714]: time="2026-01-24T00:45:49.285533290Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:49.285957 containerd[1714]: time="2026-01-24T00:45:49.285592991Z" level=info msg="RemovePodSandbox \"47404b7444a37607c538594eb6710b6f3d11dd57f30cf5b81b765a6fad85e122\" returns successfully" Jan 24 00:45:49.286127 containerd[1714]: time="2026-01-24T00:45:49.286101203Z" level=info msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" Jan 24 00:45:49.298371 containerd[1714]: time="2026-01-24T00:45:49.298149175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.340 [WARNING][5753] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0", GenerateName:"calico-kube-controllers-5598cf5ccb-", Namespace:"calico-system", SelfLink:"", UID:"ea8e1ae1-59b4-45f9-9265-2981e79d3abb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5598cf5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25", Pod:"calico-kube-controllers-5598cf5ccb-2mj7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali274192c5ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.340 [INFO][5753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.340 [INFO][5753] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" iface="eth0" netns="" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.340 [INFO][5753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.340 [INFO][5753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.363 [INFO][5760] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.363 [INFO][5760] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.363 [INFO][5760] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.368 [WARNING][5760] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.368 [INFO][5760] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.369 [INFO][5760] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.372171 containerd[1714]: 2026-01-24 00:45:49.371 [INFO][5753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.372861 containerd[1714]: time="2026-01-24T00:45:49.372737765Z" level=info msg="TearDown network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" successfully" Jan 24 00:45:49.372861 containerd[1714]: time="2026-01-24T00:45:49.372768465Z" level=info msg="StopPodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" returns successfully" Jan 24 00:45:49.373293 containerd[1714]: time="2026-01-24T00:45:49.373266977Z" level=info msg="RemovePodSandbox for \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" Jan 24 00:45:49.373415 containerd[1714]: time="2026-01-24T00:45:49.373296177Z" level=info msg="Forcibly stopping sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\"" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.407 [WARNING][5774] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0", GenerateName:"calico-kube-controllers-5598cf5ccb-", Namespace:"calico-system", SelfLink:"", UID:"ea8e1ae1-59b4-45f9-9265-2981e79d3abb", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2026, time.January, 24, 0, 45, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5598cf5ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-e69c55f9b7", ContainerID:"ccb5c888eeaafcf508e87c6305d3e7cfec477e489068147977ed344cd74b2a25", Pod:"calico-kube-controllers-5598cf5ccb-2mj7w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.76.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali274192c5ea8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.407 [INFO][5774] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.407 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" iface="eth0" netns="" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.407 [INFO][5774] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.407 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.427 [INFO][5781] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.427 [INFO][5781] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.427 [INFO][5781] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.433 [WARNING][5781] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.433 [INFO][5781] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" HandleID="k8s-pod-network.63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Workload="ci--4081.3.6--n--e69c55f9b7-k8s-calico--kube--controllers--5598cf5ccb--2mj7w-eth0" Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.434 [INFO][5781] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 24 00:45:49.437367 containerd[1714]: 2026-01-24 00:45:49.435 [INFO][5774] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63" Jan 24 00:45:49.437367 containerd[1714]: time="2026-01-24T00:45:49.437314127Z" level=info msg="TearDown network for sandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" successfully" Jan 24 00:45:49.445466 containerd[1714]: time="2026-01-24T00:45:49.445431211Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 24 00:45:49.445594 containerd[1714]: time="2026-01-24T00:45:49.445491312Z" level=info msg="RemovePodSandbox \"63baaadc6a53f0f0fa0d637e1d39cf2332b2877ef9227f6c5b550ad7bab8eb63\" returns successfully" Jan 24 00:45:49.601503 containerd[1714]: time="2026-01-24T00:45:49.601307341Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:49.603869 containerd[1714]: time="2026-01-24T00:45:49.603823398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:45:49.604006 containerd[1714]: time="2026-01-24T00:45:49.603897700Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:49.604346 kubelet[3160]: E0124 00:45:49.604055 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:49.604346 kubelet[3160]: E0124 00:45:49.604089 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:45:49.604346 kubelet[3160]: E0124 00:45:49.604169 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052): 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:49.604346 kubelet[3160]: E0124 00:45:49.604198 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:45:50.299604 containerd[1714]: time="2026-01-24T00:45:50.298638633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:45:50.570598 containerd[1714]: time="2026-01-24T00:45:50.570546891Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:50.573480 containerd[1714]: time="2026-01-24T00:45:50.573389356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:45:50.573480 containerd[1714]: time="2026-01-24T00:45:50.573432457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:45:50.573682 kubelet[3160]: E0124 00:45:50.573642 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:50.573757 kubelet[3160]: E0124 00:45:50.573694 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:45:50.573881 kubelet[3160]: E0124 00:45:50.573790 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:50.576123 containerd[1714]: time="2026-01-24T00:45:50.576062316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:45:50.840270 containerd[1714]: time="2026-01-24T00:45:50.840101596Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:50.843387 containerd[1714]: time="2026-01-24T00:45:50.843306568Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:45:50.843657 containerd[1714]: time="2026-01-24T00:45:50.843347669Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:50.843747 kubelet[3160]: E0124 00:45:50.843619 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:50.843747 kubelet[3160]: E0124 00:45:50.843678 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:45:50.844731 kubelet[3160]: E0124 00:45:50.843816 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:50.844731 kubelet[3160]: E0124 00:45:50.843875 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:45:53.297464 containerd[1714]: time="2026-01-24T00:45:53.297410146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:53.565100 containerd[1714]: time="2026-01-24T00:45:53.565054307Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:53.568535 containerd[1714]: time="2026-01-24T00:45:53.568474985Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:53.568665 containerd[1714]: time="2026-01-24T00:45:53.568577187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" 
Jan 24 00:45:53.568898 kubelet[3160]: E0124 00:45:53.568859 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:53.569313 kubelet[3160]: E0124 00:45:53.568909 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:53.569313 kubelet[3160]: E0124 00:45:53.569016 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:53.569313 kubelet[3160]: E0124 00:45:53.569064 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:45:54.299568 containerd[1714]: time="2026-01-24T00:45:54.299369837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:45:54.562869 containerd[1714]: time="2026-01-24T00:45:54.562820203Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:54.565914 containerd[1714]: time="2026-01-24T00:45:54.565798271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:45:54.565914 containerd[1714]: time="2026-01-24T00:45:54.565861172Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:45:54.566126 kubelet[3160]: E0124 00:45:54.566067 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:54.566199 kubelet[3160]: E0124 00:45:54.566132 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:45:54.566270 
kubelet[3160]: E0124 00:45:54.566236 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:54.568073 containerd[1714]: time="2026-01-24T00:45:54.567793616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:45:54.833521 containerd[1714]: time="2026-01-24T00:45:54.833394331Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:54.836522 containerd[1714]: time="2026-01-24T00:45:54.836473601Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:45:54.836675 containerd[1714]: time="2026-01-24T00:45:54.836500801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:45:54.836792 kubelet[3160]: E0124 00:45:54.836747 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:54.837197 kubelet[3160]: E0124 00:45:54.836807 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:45:54.837197 kubelet[3160]: E0124 00:45:54.836907 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:54.837197 kubelet[3160]: E0124 00:45:54.836965 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:45:55.297254 containerd[1714]: time="2026-01-24T00:45:55.297023397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:45:55.565142 containerd[1714]: time="2026-01-24T00:45:55.565092795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:55.567796 containerd[1714]: time="2026-01-24T00:45:55.567675056Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:45:55.567796 containerd[1714]: time="2026-01-24T00:45:55.567714157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:45:55.567987 kubelet[3160]: E0124 00:45:55.567949 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:55.568085 kubelet[3160]: E0124 00:45:55.567997 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:45:55.568134 kubelet[3160]: E0124 00:45:55.568094 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:55.568258 kubelet[3160]: E0124 00:45:55.568138 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:45:57.297310 containerd[1714]: time="2026-01-24T00:45:57.297265991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:45:57.562654 containerd[1714]: time="2026-01-24T00:45:57.562603625Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:45:57.565270 
containerd[1714]: time="2026-01-24T00:45:57.565172585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:45:57.565270 containerd[1714]: time="2026-01-24T00:45:57.565221386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:45:57.565490 kubelet[3160]: E0124 00:45:57.565415 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:57.565490 kubelet[3160]: E0124 00:45:57.565467 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:45:57.565948 kubelet[3160]: E0124 00:45:57.565561 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:45:57.565948 kubelet[3160]: E0124 00:45:57.565613 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:46:02.300184 kubelet[3160]: E0124 00:46:02.299842 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:46:02.303372 kubelet[3160]: 
E0124 00:46:02.302165 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:46:05.078653 systemd[1]: run-containerd-runc-k8s.io-0bcebb107ea8acf2c51aaf5d3134ad4b0e00c7935ae28ff2b7ca7e4ed8ddedcf-runc.rJgoz3.mount: Deactivated successfully. Jan 24 00:46:05.297998 kubelet[3160]: E0124 00:46:05.297946 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:46:07.297577 kubelet[3160]: E0124 00:46:07.297182 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:46:09.297428 kubelet[3160]: E0124 00:46:09.297375 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:46:10.299615 kubelet[3160]: E0124 00:46:10.299508 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:46:16.303037 containerd[1714]: time="2026-01-24T00:46:16.302991143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:46:16.581010 containerd[1714]: time="2026-01-24T00:46:16.580805980Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:16.584471 containerd[1714]: time="2026-01-24T00:46:16.584245458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:46:16.584471 containerd[1714]: time="2026-01-24T00:46:16.584368661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:16.584712 kubelet[3160]: E0124 00:46:16.584660 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:16.585805 kubelet[3160]: E0124 00:46:16.584711 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:46:16.585805 kubelet[3160]: E0124 00:46:16.584811 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:16.585805 kubelet[3160]: E0124 00:46:16.584850 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:46:17.301467 containerd[1714]: time="2026-01-24T00:46:17.301396417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:46:17.570350 containerd[1714]: time="2026-01-24T00:46:17.569353630Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:17.572556 containerd[1714]: time="2026-01-24T00:46:17.572502401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:46:17.572666 containerd[1714]: time="2026-01-24T00:46:17.572610404Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:46:17.573489 kubelet[3160]: E0124 00:46:17.572839 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:17.573489 kubelet[3160]: E0124 00:46:17.572899 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:46:17.573489 kubelet[3160]: E0124 00:46:17.572987 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:17.574321 containerd[1714]: time="2026-01-24T00:46:17.574280042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:46:17.843367 containerd[1714]: time="2026-01-24T00:46:17.843133475Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:17.846728 containerd[1714]: time="2026-01-24T00:46:17.846265446Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:46:17.846728 containerd[1714]: time="2026-01-24T00:46:17.846375449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:17.847009 kubelet[3160]: E0124 00:46:17.846965 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:17.847492 kubelet[3160]: E0124 00:46:17.847019 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:46:17.847492 kubelet[3160]: E0124 00:46:17.847139 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
whisker-backend start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:17.847854 kubelet[3160]: E0124 00:46:17.847801 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:46:20.302343 containerd[1714]: time="2026-01-24T00:46:20.302077314Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:46:20.581186 containerd[1714]: time="2026-01-24T00:46:20.581124965Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:20.585015 containerd[1714]: time="2026-01-24T00:46:20.584862750Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:46:20.585015 containerd[1714]: time="2026-01-24T00:46:20.584962353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:46:20.585858 kubelet[3160]: E0124 00:46:20.585340 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:20.585858 kubelet[3160]: E0124 00:46:20.585397 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:46:20.585858 kubelet[3160]: E0124 00:46:20.585491 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:20.587152 containerd[1714]: time="2026-01-24T00:46:20.587130202Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:46:20.858649 containerd[1714]: time="2026-01-24T00:46:20.858490378Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:20.862772 containerd[1714]: time="2026-01-24T00:46:20.862715175Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:46:20.863007 containerd[1714]: time="2026-01-24T00:46:20.862842478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:46:20.863120 kubelet[3160]: E0124 00:46:20.863058 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:20.863241 kubelet[3160]: E0124 00:46:20.863133 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:46:20.863304 kubelet[3160]: E0124 00:46:20.863238 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:20.863448 kubelet[3160]: E0124 00:46:20.863365 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:46:22.298961 containerd[1714]: time="2026-01-24T00:46:22.298673959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:22.419051 waagent[1904]: 2026-01-24T00:46:22.418052Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 24 00:46:22.427865 
waagent[1904]: 2026-01-24T00:46:22.427807Z INFO ExtHandler Jan 24 00:46:22.428003 waagent[1904]: 2026-01-24T00:46:22.427938Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 3ed0d7bd-9d5f-4804-9b3e-2735f8343d99 eTag: 16784503998556944888 source: Fabric] Jan 24 00:46:22.428355 waagent[1904]: 2026-01-24T00:46:22.428282Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 24 00:46:22.428952 waagent[1904]: 2026-01-24T00:46:22.428895Z INFO ExtHandler Jan 24 00:46:22.429033 waagent[1904]: 2026-01-24T00:46:22.428982Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 24 00:46:22.510341 waagent[1904]: 2026-01-24T00:46:22.510287Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 24 00:46:22.564064 containerd[1714]: time="2026-01-24T00:46:22.564017099Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:22.569228 containerd[1714]: time="2026-01-24T00:46:22.569122115Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:22.569228 containerd[1714]: time="2026-01-24T00:46:22.569180916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:22.569430 kubelet[3160]: E0124 00:46:22.569388 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:22.569792 kubelet[3160]: E0124 00:46:22.569445 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:22.569792 kubelet[3160]: E0124 00:46:22.569674 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:22.569792 kubelet[3160]: E0124 00:46:22.569722 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:46:22.570366 containerd[1714]: time="2026-01-24T00:46:22.570322442Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:46:22.676377 waagent[1904]: 2026-01-24T00:46:22.674556Z INFO ExtHandler Downloaded certificate {'thumbprint': '6B068E7114567446D724D2574B0BBA050758371A', 'hasPrivateKey': True} Jan 24 00:46:22.676377 waagent[1904]: 2026-01-24T00:46:22.675635Z INFO ExtHandler Fetch goal state completed Jan 24 00:46:22.676377 waagent[1904]: 2026-01-24T00:46:22.676264Z INFO ExtHandler ExtHandler Jan 24 00:46:22.676617 waagent[1904]: 2026-01-24T00:46:22.676398Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 650240f6-beb2-4041-98ad-9018254c651e correlation 386b1bf8-49dc-47e4-ab5f-7ba9edfec0c8 created: 2026-01-24T00:46:14.835190Z] Jan 24 00:46:22.676887 waagent[1904]: 2026-01-24T00:46:22.676826Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 24 00:46:22.677636 waagent[1904]: 2026-01-24T00:46:22.677583Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 24 00:46:22.833905 containerd[1714]: time="2026-01-24T00:46:22.833766739Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:22.838225 containerd[1714]: time="2026-01-24T00:46:22.837929233Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:46:22.838225 containerd[1714]: time="2026-01-24T00:46:22.838015035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:46:22.838586 kubelet[3160]: E0124 00:46:22.838534 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:22.838646 kubelet[3160]: E0124 00:46:22.838586 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:46:22.838701 kubelet[3160]: E0124 00:46:22.838681 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:22.838746 kubelet[3160]: E0124 00:46:22.838722 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:46:23.299564 containerd[1714]: time="2026-01-24T00:46:23.299434738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:46:23.588920 containerd[1714]: time="2026-01-24T00:46:23.587907304Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:46:23.590729 containerd[1714]: time="2026-01-24T00:46:23.590536764Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:46:23.590729 containerd[1714]: time="2026-01-24T00:46:23.590681367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:46:23.592035 kubelet[3160]: E0124 00:46:23.591390 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:23.592035 kubelet[3160]: E0124 00:46:23.591451 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:46:23.592035 kubelet[3160]: E0124 00:46:23.591547 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:46:23.592035 kubelet[3160]: E0124 00:46:23.591585 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:46:28.299613 kubelet[3160]: E0124 00:46:28.299554 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:46:29.298687 kubelet[3160]: E0124 00:46:29.298532 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:46:34.301958 kubelet[3160]: E0124 00:46:34.301540 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:46:35.298433 kubelet[3160]: E0124 00:46:35.298245 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:46:35.299268 kubelet[3160]: E0124 00:46:35.299018 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" 
podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:46:38.305524 kubelet[3160]: E0124 00:46:38.304633 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:46:43.296898 kubelet[3160]: E0124 00:46:43.296767 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:46:44.303846 kubelet[3160]: E0124 00:46:44.303799 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:46:45.297198 kubelet[3160]: E0124 00:46:45.297134 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:46:46.301113 kubelet[3160]: E0124 00:46:46.301052 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:46:47.297162 kubelet[3160]: E0124 00:46:47.296750 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:46:51.297136 kubelet[3160]: E0124 00:46:51.297087 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:46:55.298201 kubelet[3160]: E0124 00:46:55.298144 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:46:56.302397 kubelet[3160]: E0124 00:46:56.300667 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 
00:46:58.298066 kubelet[3160]: E0124 00:46:58.297497 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:46:59.297963 kubelet[3160]: E0124 00:46:59.297803 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:47:01.297416 containerd[1714]: time="2026-01-24T00:47:01.297120559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 24 00:47:01.569630 containerd[1714]: time="2026-01-24T00:47:01.569512217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:01.573773 containerd[1714]: time="2026-01-24T00:47:01.573715412Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 24 00:47:01.573979 containerd[1714]: time="2026-01-24T00:47:01.573810415Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 24 00:47:01.574089 kubelet[3160]: E0124 00:47:01.574016 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:47:01.574722 kubelet[3160]: E0124 00:47:01.574085 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 24 00:47:01.574722 kubelet[3160]: E0124 00:47:01.574185 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:01.575706 containerd[1714]: time="2026-01-24T00:47:01.575511453Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 24 00:47:01.837431 containerd[1714]: time="2026-01-24T00:47:01.836424752Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:01.841389 containerd[1714]: time="2026-01-24T00:47:01.841309262Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 24 00:47:01.841510 containerd[1714]: time="2026-01-24T00:47:01.841401264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 24 00:47:01.841866 kubelet[3160]: E0124 00:47:01.841711 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:47:01.841866 kubelet[3160]: E0124 00:47:01.841789 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 24 00:47:01.842455 kubelet[3160]: E0124 00:47:01.842064 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-msr6b_calico-system(6289d75a-fb3d-4a7e-b426-fb74d3f97fd2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:01.842455 kubelet[3160]: E0124 00:47:01.842400 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:47:02.302516 kubelet[3160]: E0124 00:47:02.301074 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:47:04.871578 systemd[1]: Started sshd@7-10.200.4.34:22-10.200.16.10:35522.service - OpenSSH per-connection server daemon (10.200.16.10:35522). Jan 24 00:47:05.492511 sshd[5879]: Accepted publickey for core from 10.200.16.10 port 35522 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:05.494207 sshd[5879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:05.502071 systemd-logind[1697]: New session 10 of user core. Jan 24 00:47:05.510512 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 24 00:47:06.046092 sshd[5879]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:06.053298 systemd[1]: sshd@7-10.200.4.34:22-10.200.16.10:35522.service: Deactivated successfully. Jan 24 00:47:06.053368 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit. Jan 24 00:47:06.059451 systemd[1]: session-10.scope: Deactivated successfully. Jan 24 00:47:06.062872 systemd-logind[1697]: Removed session 10. Jan 24 00:47:06.299245 containerd[1714]: time="2026-01-24T00:47:06.299125440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 24 00:47:06.572466 containerd[1714]: time="2026-01-24T00:47:06.572064410Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:06.575290 containerd[1714]: time="2026-01-24T00:47:06.575232182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 24 00:47:06.575423 containerd[1714]: time="2026-01-24T00:47:06.575350585Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 24 00:47:06.575585 kubelet[3160]: E0124 00:47:06.575544 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:47:06.575976 kubelet[3160]: E0124 00:47:06.575601 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 24 00:47:06.575976 kubelet[3160]: E0124 00:47:06.575691 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
logger="UnhandledError" Jan 24 00:47:06.577121 containerd[1714]: time="2026-01-24T00:47:06.577088724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 24 00:47:06.837750 containerd[1714]: time="2026-01-24T00:47:06.837448110Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:06.841967 containerd[1714]: time="2026-01-24T00:47:06.840612081Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 24 00:47:06.841967 containerd[1714]: time="2026-01-24T00:47:06.840700083Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 24 00:47:06.842155 kubelet[3160]: E0124 00:47:06.840888 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:47:06.842155 kubelet[3160]: E0124 00:47:06.840932 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 24 00:47:06.842155 kubelet[3160]: E0124 00:47:06.841019 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-65cb7dc6d6-nfm24_calico-system(0dadebec-93b1-44bd-9cc0-05be5a1a434d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:06.842305 kubelet[3160]: E0124 00:47:06.841072 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:47:09.297415 containerd[1714]: time="2026-01-24T00:47:09.297225893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:47:09.568822 containerd[1714]: time="2026-01-24T00:47:09.568773160Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 
00:47:09.571317 containerd[1714]: time="2026-01-24T00:47:09.571269516Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:47:09.571454 containerd[1714]: time="2026-01-24T00:47:09.571304617Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:09.571613 kubelet[3160]: E0124 00:47:09.571568 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:09.572179 kubelet[3160]: E0124 00:47:09.571623 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:09.572179 kubelet[3160]: E0124 00:47:09.571719 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-w9j7d_calico-apiserver(3b4d50cd-bfa9-4817-b2aa-6df460bb529b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:09.572179 kubelet[3160]: E0124 00:47:09.571772 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:47:11.155482 systemd[1]: Started sshd@8-10.200.4.34:22-10.200.16.10:36064.service - OpenSSH per-connection server daemon (10.200.16.10:36064). 
Jan 24 00:47:11.298753 containerd[1714]: time="2026-01-24T00:47:11.298710312Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 24 00:47:11.577003 containerd[1714]: time="2026-01-24T00:47:11.576826425Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:11.579576 containerd[1714]: time="2026-01-24T00:47:11.579417183Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 24 00:47:11.579576 containerd[1714]: time="2026-01-24T00:47:11.579515685Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:11.580134 kubelet[3160]: E0124 00:47:11.579927 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:47:11.580134 kubelet[3160]: E0124 00:47:11.579995 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 24 00:47:11.581133 kubelet[3160]: E0124 00:47:11.581022 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-wr4rk_calico-system(60f29bc1-01eb-4e81-a219-3085d4f87052): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:11.581133 kubelet[3160]: E0124 00:47:11.581074 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:47:11.771410 sshd[5927]: Accepted publickey for core from 10.200.16.10 port 36064 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:11.773782 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:11.780825 systemd-logind[1697]: New session 11 of user core. Jan 24 00:47:11.786618 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 24 00:47:12.300370 sshd[5927]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:12.305148 systemd[1]: sshd@8-10.200.4.34:22-10.200.16.10:36064.service: Deactivated successfully. Jan 24 00:47:12.309150 systemd[1]: session-11.scope: Deactivated successfully. Jan 24 00:47:12.311026 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit. 
Jan 24 00:47:12.313806 systemd-logind[1697]: Removed session 11. Jan 24 00:47:13.298252 containerd[1714]: time="2026-01-24T00:47:13.298197085Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 24 00:47:13.567343 containerd[1714]: time="2026-01-24T00:47:13.567259097Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:13.571458 containerd[1714]: time="2026-01-24T00:47:13.571251286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 24 00:47:13.571458 containerd[1714]: time="2026-01-24T00:47:13.571356588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 24 00:47:13.572353 kubelet[3160]: E0124 00:47:13.571759 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:13.572353 kubelet[3160]: E0124 00:47:13.571813 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 24 00:47:13.572353 kubelet[3160]: E0124 00:47:13.571902 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-64999767c9-nk8rp_calico-apiserver(7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:13.572353 kubelet[3160]: E0124 00:47:13.571941 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:47:14.299993 containerd[1714]: time="2026-01-24T00:47:14.299871065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 24 00:47:14.571455 containerd[1714]: time="2026-01-24T00:47:14.571405832Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 24 00:47:14.574039 containerd[1714]: time="2026-01-24T00:47:14.573983390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 24 00:47:14.574147 containerd[1714]: time="2026-01-24T00:47:14.574088192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 24 00:47:14.574376 kubelet[3160]: E0124 00:47:14.574318 3160 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:47:14.574766 kubelet[3160]: E0124 00:47:14.574390 3160 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 24 00:47:14.574766 kubelet[3160]: E0124 00:47:14.574484 3160 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-5598cf5ccb-2mj7w_calico-system(ea8e1ae1-59b4-45f9-9265-2981e79d3abb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 24 00:47:14.574766 kubelet[3160]: E0124 00:47:14.574537 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:47:16.304352 kubelet[3160]: E0124 00:47:16.302322 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:47:17.416969 systemd[1]: Started sshd@9-10.200.4.34:22-10.200.16.10:36076.service - OpenSSH per-connection server daemon (10.200.16.10:36076). 
Jan 24 00:47:18.019056 sshd[5948]: Accepted publickey for core from 10.200.16.10 port 36076 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:18.021652 sshd[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:18.027087 systemd-logind[1697]: New session 12 of user core. Jan 24 00:47:18.034504 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 24 00:47:18.587007 sshd[5948]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:18.590926 systemd[1]: sshd@9-10.200.4.34:22-10.200.16.10:36076.service: Deactivated successfully. Jan 24 00:47:18.595531 systemd[1]: session-12.scope: Deactivated successfully. Jan 24 00:47:18.598443 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit. Jan 24 00:47:18.600119 systemd-logind[1697]: Removed session 12. Jan 24 00:47:18.700747 systemd[1]: Started sshd@10-10.200.4.34:22-10.200.16.10:36086.service - OpenSSH per-connection server daemon (10.200.16.10:36086). Jan 24 00:47:19.307306 sshd[5962]: Accepted publickey for core from 10.200.16.10 port 36086 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:19.310054 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:19.314776 systemd-logind[1697]: New session 13 of user core. Jan 24 00:47:19.319492 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 24 00:47:19.876606 sshd[5962]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:19.881963 systemd[1]: sshd@10-10.200.4.34:22-10.200.16.10:36086.service: Deactivated successfully. Jan 24 00:47:19.887182 systemd[1]: session-13.scope: Deactivated successfully. Jan 24 00:47:19.888709 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit. Jan 24 00:47:19.890162 systemd-logind[1697]: Removed session 13. Jan 24 00:47:19.988662 systemd[1]: Started sshd@11-10.200.4.34:22-10.200.16.10:35474.service - OpenSSH per-connection server daemon (10.200.16.10:35474). Jan 24 00:47:20.593380 sshd[5973]: Accepted publickey for core from 10.200.16.10 port 35474 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:20.594994 sshd[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:20.602143 systemd-logind[1697]: New session 14 of user core. Jan 24 00:47:20.608683 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 24 00:47:21.120875 sshd[5973]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:21.127125 systemd[1]: sshd@11-10.200.4.34:22-10.200.16.10:35474.service: Deactivated successfully. Jan 24 00:47:21.127750 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit. Jan 24 00:47:21.132731 systemd[1]: session-14.scope: Deactivated successfully. Jan 24 00:47:21.138879 systemd-logind[1697]: Removed session 14. 
Jan 24 00:47:22.301286 kubelet[3160]: E0124 00:47:22.300382 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d" Jan 24 00:47:24.300106 kubelet[3160]: E0124 00:47:24.300041 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052" Jan 24 00:47:24.302721 kubelet[3160]: E0124 00:47:24.302688 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b" Jan 24 00:47:26.233475 systemd[1]: Started sshd@12-10.200.4.34:22-10.200.16.10:35476.service - OpenSSH per-connection server daemon (10.200.16.10:35476). Jan 24 00:47:26.845269 sshd[5992]: Accepted publickey for core from 10.200.16.10 port 35476 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:26.846840 sshd[5992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:26.851492 systemd-logind[1697]: New session 15 of user core. Jan 24 00:47:26.854696 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 24 00:47:27.298539 kubelet[3160]: E0124 00:47:27.297921 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb" Jan 24 00:47:27.300820 kubelet[3160]: E0124 00:47:27.298903 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2" Jan 24 00:47:27.333282 sshd[5992]: pam_unix(sshd:session): session closed for user core Jan 24 00:47:27.337807 systemd[1]: sshd@12-10.200.4.34:22-10.200.16.10:35476.service: Deactivated successfully. Jan 24 00:47:27.342302 systemd[1]: session-15.scope: Deactivated successfully. Jan 24 00:47:27.343130 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit. Jan 24 00:47:27.344134 systemd-logind[1697]: Removed session 15. Jan 24 00:47:28.301972 kubelet[3160]: E0124 00:47:28.301914 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9" Jan 24 00:47:32.447574 systemd[1]: Started sshd@13-10.200.4.34:22-10.200.16.10:55304.service - OpenSSH per-connection server daemon (10.200.16.10:55304). Jan 24 00:47:33.055118 sshd[6005]: Accepted publickey for core from 10.200.16.10 port 55304 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg Jan 24 00:47:33.056354 sshd[6005]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 24 00:47:33.063695 systemd-logind[1697]: New session 16 of user core. Jan 24 00:47:33.070672 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 24 00:47:33.298044 kubelet[3160]: E0124 00:47:33.297944 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d"
Jan 24 00:47:33.547630 sshd[6005]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:33.551750 systemd[1]: sshd@13-10.200.4.34:22-10.200.16.10:55304.service: Deactivated successfully.
Jan 24 00:47:33.554423 systemd[1]: session-16.scope: Deactivated successfully.
Jan 24 00:47:33.555637 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit.
Jan 24 00:47:33.557215 systemd-logind[1697]: Removed session 16.
Jan 24 00:47:36.300143 kubelet[3160]: E0124 00:47:36.299723 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b"
Jan 24 00:47:38.297834 kubelet[3160]: E0124 00:47:38.297773 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052"
Jan 24 00:47:38.661686 systemd[1]: Started sshd@14-10.200.4.34:22-10.200.16.10:55310.service - OpenSSH per-connection server daemon (10.200.16.10:55310).
Jan 24 00:47:39.275429 sshd[6039]: Accepted publickey for core from 10.200.16.10 port 55310 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:39.277027 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:39.284094 systemd-logind[1697]: New session 17 of user core.
Jan 24 00:47:39.287817 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 24 00:47:39.819420 sshd[6039]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:39.822343 systemd[1]: sshd@14-10.200.4.34:22-10.200.16.10:55310.service: Deactivated successfully.
Jan 24 00:47:39.824648 systemd[1]: session-17.scope: Deactivated successfully.
Jan 24 00:47:39.826646 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit.
Jan 24 00:47:39.827863 systemd-logind[1697]: Removed session 17.
Jan 24 00:47:39.926541 systemd[1]: Started sshd@15-10.200.4.34:22-10.200.16.10:45728.service - OpenSSH per-connection server daemon (10.200.16.10:45728).
Jan 24 00:47:40.299581 kubelet[3160]: E0124 00:47:40.299441 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb"
Jan 24 00:47:40.539034 sshd[6052]: Accepted publickey for core from 10.200.16.10 port 45728 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:40.542204 sshd[6052]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:40.551710 systemd-logind[1697]: New session 18 of user core.
Jan 24 00:47:40.556498 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 24 00:47:41.105816 sshd[6052]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:41.113596 systemd[1]: sshd@15-10.200.4.34:22-10.200.16.10:45728.service: Deactivated successfully.
Jan 24 00:47:41.118073 systemd[1]: session-18.scope: Deactivated successfully.
Jan 24 00:47:41.121915 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit.
Jan 24 00:47:41.124704 systemd-logind[1697]: Removed session 18.
Jan 24 00:47:41.223655 systemd[1]: Started sshd@16-10.200.4.34:22-10.200.16.10:45740.service - OpenSSH per-connection server daemon (10.200.16.10:45740).
Jan 24 00:47:41.827976 sshd[6063]: Accepted publickey for core from 10.200.16.10 port 45740 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:41.830146 sshd[6063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:41.837395 systemd-logind[1697]: New session 19 of user core.
Jan 24 00:47:41.842542 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 24 00:47:42.300071 kubelet[3160]: E0124 00:47:42.299743 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9"
Jan 24 00:47:42.300071 kubelet[3160]: E0124 00:47:42.300020 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:47:43.054266 sshd[6063]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:43.058392 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit.
Jan 24 00:47:43.061644 systemd[1]: sshd@16-10.200.4.34:22-10.200.16.10:45740.service: Deactivated successfully.
Jan 24 00:47:43.064610 systemd[1]: session-19.scope: Deactivated successfully.
Jan 24 00:47:43.066755 systemd-logind[1697]: Removed session 19.
Jan 24 00:47:43.171941 systemd[1]: Started sshd@17-10.200.4.34:22-10.200.16.10:45750.service - OpenSSH per-connection server daemon (10.200.16.10:45750).
Jan 24 00:47:43.787252 sshd[6079]: Accepted publickey for core from 10.200.16.10 port 45750 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:43.788754 sshd[6079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:43.794374 systemd-logind[1697]: New session 20 of user core.
Jan 24 00:47:43.798499 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 24 00:47:44.492597 sshd[6079]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:44.496365 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit.
Jan 24 00:47:44.497136 systemd[1]: sshd@17-10.200.4.34:22-10.200.16.10:45750.service: Deactivated successfully.
Jan 24 00:47:44.499569 systemd[1]: session-20.scope: Deactivated successfully.
Jan 24 00:47:44.500670 systemd-logind[1697]: Removed session 20.
Jan 24 00:47:44.604098 systemd[1]: Started sshd@18-10.200.4.34:22-10.200.16.10:45760.service - OpenSSH per-connection server daemon (10.200.16.10:45760).
Jan 24 00:47:45.210637 sshd[6092]: Accepted publickey for core from 10.200.16.10 port 45760 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:45.212722 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:45.217882 systemd-logind[1697]: New session 21 of user core.
Jan 24 00:47:45.231513 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 24 00:47:45.757903 sshd[6092]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:45.762869 systemd[1]: sshd@18-10.200.4.34:22-10.200.16.10:45760.service: Deactivated successfully.
Jan 24 00:47:45.768037 systemd[1]: session-21.scope: Deactivated successfully.
Jan 24 00:47:45.769150 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit.
Jan 24 00:47:45.770972 systemd-logind[1697]: Removed session 21.
Jan 24 00:47:47.297629 kubelet[3160]: E0124 00:47:47.297573 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b"
Jan 24 00:47:47.300811 kubelet[3160]: E0124 00:47:47.300760 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d"
Jan 24 00:47:49.296618 kubelet[3160]: E0124 00:47:49.296560 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052"
Jan 24 00:47:50.871643 systemd[1]: Started sshd@19-10.200.4.34:22-10.200.16.10:44916.service - OpenSSH per-connection server daemon (10.200.16.10:44916).
Jan 24 00:47:51.481094 sshd[6109]: Accepted publickey for core from 10.200.16.10 port 44916 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:51.484220 sshd[6109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:51.491054 systemd-logind[1697]: New session 22 of user core.
Jan 24 00:47:51.497524 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 24 00:47:52.036643 sshd[6109]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:52.042639 systemd[1]: sshd@19-10.200.4.34:22-10.200.16.10:44916.service: Deactivated successfully.
Jan 24 00:47:52.046376 systemd[1]: session-22.scope: Deactivated successfully.
Jan 24 00:47:52.047416 systemd-logind[1697]: Session 22 logged out. Waiting for processes to exit.
Jan 24 00:47:52.048929 systemd-logind[1697]: Removed session 22.
Jan 24 00:47:52.301533 kubelet[3160]: E0124 00:47:52.301241 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb"
Jan 24 00:47:53.298917 kubelet[3160]: E0124 00:47:53.298842 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:47:56.299032 kubelet[3160]: E0124 00:47:56.298929 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9"
Jan 24 00:47:57.148620 systemd[1]: Started sshd@20-10.200.4.34:22-10.200.16.10:44918.service - OpenSSH per-connection server daemon (10.200.16.10:44918).
Jan 24 00:47:57.764181 sshd[6124]: Accepted publickey for core from 10.200.16.10 port 44918 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:47:57.767355 sshd[6124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:47:57.775555 systemd-logind[1697]: New session 23 of user core.
Jan 24 00:47:57.782906 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 24 00:47:58.288540 sshd[6124]: pam_unix(sshd:session): session closed for user core
Jan 24 00:47:58.292153 systemd-logind[1697]: Session 23 logged out. Waiting for processes to exit.
Jan 24 00:47:58.294889 systemd[1]: sshd@20-10.200.4.34:22-10.200.16.10:44918.service: Deactivated successfully.
Jan 24 00:47:58.301313 systemd[1]: session-23.scope: Deactivated successfully.
Jan 24 00:47:58.304573 systemd-logind[1697]: Removed session 23.
Jan 24 00:48:02.298008 kubelet[3160]: E0124 00:48:02.297615 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b"
Jan 24 00:48:02.299104 kubelet[3160]: E0124 00:48:02.298829 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d"
Jan 24 00:48:03.405573 systemd[1]: Started sshd@21-10.200.4.34:22-10.200.16.10:54630.service - OpenSSH per-connection server daemon (10.200.16.10:54630).
Jan 24 00:48:04.007904 sshd[6137]: Accepted publickey for core from 10.200.16.10 port 54630 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:48:04.009618 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:48:04.014174 systemd-logind[1697]: New session 24 of user core.
Jan 24 00:48:04.018627 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 24 00:48:04.301413 kubelet[3160]: E0124 00:48:04.300632 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052"
Jan 24 00:48:04.304192 kubelet[3160]: E0124 00:48:04.304138 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
Jan 24 00:48:04.565608 sshd[6137]: pam_unix(sshd:session): session closed for user core
Jan 24 00:48:04.570286 systemd-logind[1697]: Session 24 logged out. Waiting for processes to exit.
Jan 24 00:48:04.571199 systemd[1]: sshd@21-10.200.4.34:22-10.200.16.10:54630.service: Deactivated successfully.
Jan 24 00:48:04.576232 systemd[1]: session-24.scope: Deactivated successfully.
Jan 24 00:48:04.579730 systemd-logind[1697]: Removed session 24.
Jan 24 00:48:06.301495 kubelet[3160]: E0124 00:48:06.300857 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb"
Jan 24 00:48:09.298169 kubelet[3160]: E0124 00:48:09.298111 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-nk8rp" podUID="7b33a64f-b7f5-40bf-8d4e-99b72fa6bbe9"
Jan 24 00:48:09.684437 systemd[1]: Started sshd@22-10.200.4.34:22-10.200.16.10:59422.service - OpenSSH per-connection server daemon (10.200.16.10:59422).
Jan 24 00:48:10.303136 sshd[6171]: Accepted publickey for core from 10.200.16.10 port 59422 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:48:10.305548 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:48:10.312685 systemd-logind[1697]: New session 25 of user core.
Jan 24 00:48:10.319532 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 24 00:48:10.843273 sshd[6171]: pam_unix(sshd:session): session closed for user core
Jan 24 00:48:10.850204 systemd-logind[1697]: Session 25 logged out. Waiting for processes to exit.
Jan 24 00:48:10.851266 systemd[1]: sshd@22-10.200.4.34:22-10.200.16.10:59422.service: Deactivated successfully.
Jan 24 00:48:10.856319 systemd[1]: session-25.scope: Deactivated successfully.
Jan 24 00:48:10.861135 systemd-logind[1697]: Removed session 25.
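Note the cadence of the repeats: calico-apiserver-64999767c9-nk8rp, for instance, shows up at 00:47:28, 00:47:42, 00:47:56 and 00:48:09, roughly every 13 to 14 seconds. A plausible reading is that these are periodic pod re-syncs logging an already-established image back-off (kubelet's image pull back-off grows toward a 5-minute cap) rather than a fresh registry pull each time. A small sketch that measures those gaps from a journal dump on stdin, fitted to the "Jan 24 00:48:09.298169" prefix used throughout this log; nothing here is a kubelet API, just log arithmetic:

    # Sketch: measure how often each pod's ImagePullBackOff line recurs.
    # The journal prefix carries no year, so datetime defaults to 1900,
    # which is harmless for computing deltas within one capture.
    import re
    import sys
    from collections import defaultdict
    from datetime import datetime

    LINE_RE = re.compile(
        r'^(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+) .*ImagePullBackOff.*pod="([^"]+)"'
    )

    last_seen, gaps = {}, defaultdict(list)
    for line in sys.stdin:
        m = LINE_RE.match(line)
        if not m:
            continue
        ts = datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f")
        pod = m.group(2)
        if pod in last_seen:
            gaps[pod].append((ts - last_seen[pod]).total_seconds())
        last_seen[pod] = ts

    for pod, deltas in sorted(gaps.items()):
        mean = sum(deltas) / len(deltas)
        print(f"{pod}: {len(deltas) + 1} occurrences, mean gap {mean:.1f}s")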
Jan 24 00:48:13.297015 kubelet[3160]: E0124 00:48:13.296649 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-64999767c9-w9j7d" podUID="3b4d50cd-bfa9-4817-b2aa-6df460bb529b"
Jan 24 00:48:13.298204 kubelet[3160]: E0124 00:48:13.298062 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-65cb7dc6d6-nfm24" podUID="0dadebec-93b1-44bd-9cc0-05be5a1a434d"
Jan 24 00:48:15.297469 kubelet[3160]: E0124 00:48:15.296886 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-wr4rk" podUID="60f29bc1-01eb-4e81-a219-3085d4f87052"
Jan 24 00:48:15.960622 systemd[1]: Started sshd@23-10.200.4.34:22-10.200.16.10:59430.service - OpenSSH per-connection server daemon (10.200.16.10:59430).
Jan 24 00:48:16.573782 sshd[6184]: Accepted publickey for core from 10.200.16.10 port 59430 ssh2: RSA SHA256:tUm71BhrWRzw84Y3CRTnQTdnkHFX3WxKBo/QtRaWGdg
Jan 24 00:48:16.576415 sshd[6184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 24 00:48:16.581463 systemd-logind[1697]: New session 26 of user core.
Jan 24 00:48:16.585495 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 24 00:48:17.104010 sshd[6184]: pam_unix(sshd:session): session closed for user core
Jan 24 00:48:17.111513 systemd-logind[1697]: Session 26 logged out. Waiting for processes to exit.
Jan 24 00:48:17.112675 systemd[1]: sshd@23-10.200.4.34:22-10.200.16.10:59430.service: Deactivated successfully.
Jan 24 00:48:17.116188 systemd[1]: session-26.scope: Deactivated successfully.
Jan 24 00:48:17.118888 systemd-logind[1697]: Removed session 26.
Jan 24 00:48:17.297612 kubelet[3160]: E0124 00:48:17.297562 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5598cf5ccb-2mj7w" podUID="ea8e1ae1-59b4-45f9-9265-2981e79d3abb"
Jan 24 00:48:19.299305 kubelet[3160]: E0124 00:48:19.298827 3160 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-msr6b" podUID="6289d75a-fb3d-4a7e-b426-fb74d3f97fd2"
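Taken together, the excerpt reduces to five pods wedged on the same missing tag set. A closing sketch of that triage, grouping stuck pods by the unresolvable image so a fix (publishing the v3.30.4 tags, or repinning the manifests to an existing version) can be verified against one list; the regex is fitted to the escaped pod_workers.go lines above, not to any stable kubelet log contract:

    # Sketch: tally which pods are stuck in ImagePullBackOff, keyed by image.
    # The backslash handling matches the escaped quoting in this journal dump.
    import re
    import sys
    from collections import defaultdict

    IMAGE_RE = re.compile(r'Back-off pulling image \\+"([^"\\]+)\\+"')
    POD_RE = re.compile(r'pod="([^"]+)"')

    def summarize(journal_lines):
        stuck = defaultdict(set)
        for line in journal_lines:
            if "ImagePullBackOff" not in line:
                continue
            pod = POD_RE.search(line)
            if not pod:
                continue
            # A multi-container error line yields several back-off images.
            for image in IMAGE_RE.findall(line):
                stuck[image].add(pod.group(1))
        return stuck

    if __name__ == "__main__":
        for image, pods in sorted(summarize(sys.stdin).items()):
            print(f"{image}: {', '.join(sorted(pods))}")

Run against this excerpt it should list the seven ghcr.io/flatcar/calico images (apiserver, csi, goldmane, kube-controllers, node-driver-registrar, whisker, whisker-backend) at v3.30.4, each mapped to the pods named in the log.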