Jan 17 00:26:46.107482 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026
Jan 17 00:26:46.107511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:46.107524 kernel: BIOS-provided physical RAM map:
Jan 17 00:26:46.107533 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 00:26:46.107540 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 17 00:26:46.107546 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable
Jan 17 00:26:46.107558 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved
Jan 17 00:26:46.107564 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable
Jan 17 00:26:46.107575 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20
Jan 17 00:26:46.107583 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved
Jan 17 00:26:46.107590 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 17 00:26:46.107600 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 17 00:26:46.107606 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 17 00:26:46.107613 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 17 00:26:46.107627 kernel: printk: bootconsole [earlyser0] enabled
Jan 17 00:26:46.107634 kernel: NX (Execute Disable) protection: active
Jan 17 00:26:46.107646 kernel: APIC: Static calls initialized
Jan 17 00:26:46.107653 kernel: efi: EFI v2.7 by Microsoft
Jan 17 00:26:46.107662 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ee82698
Jan 17 00:26:46.107671 kernel: SMBIOS 3.1.0 present.
Jan 17 00:26:46.107679 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025
Jan 17 00:26:46.107690 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 17 00:26:46.107697 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 17 00:26:46.107721 kernel: Hyper-V: Host Build 10.0.26102.1145-1-0
Jan 17 00:26:46.107733 kernel: Hyper-V: Nested features: 0x1e0101
Jan 17 00:26:46.107742 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 17 00:26:46.107753 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 17 00:26:46.107761 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 00:26:46.107768 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 00:26:46.107780 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 17 00:26:46.107787 kernel: tsc: Detected 2593.907 MHz processor
Jan 17 00:26:46.107798 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 00:26:46.107806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 00:26:46.107814 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 17 00:26:46.107826 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 00:26:46.107834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 00:26:46.107845 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 17 00:26:46.107852 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 17 00:26:46.107862 kernel: Using GB pages for direct mapping
Jan 17 00:26:46.107871 kernel: Secure boot disabled
Jan 17 00:26:46.107885 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:26:46.107896 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 17 00:26:46.107907 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107915 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107927 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 17 00:26:46.107934 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 17 00:26:46.107946 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107954 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107967 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107975 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107986 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.107995 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 00:26:46.108002 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 17 00:26:46.108014 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a]
Jan 17 00:26:46.108022 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 17 00:26:46.108030 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 17 00:26:46.108042 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 17 00:26:46.108051 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 17 00:26:46.108063 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 17 00:26:46.108071 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df]
Jan 17 00:26:46.108082 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 17 00:26:46.108091 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 00:26:46.108099 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 00:26:46.108110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 17 00:26:46.108118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 17 00:26:46.108130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 17 00:26:46.108139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 17 00:26:46.108148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 17 00:26:46.108159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 17 00:26:46.108167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 17 00:26:46.108178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 17 00:26:46.108186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 17 00:26:46.108196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 17 00:26:46.108206 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 17 00:26:46.108218 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 17 00:26:46.108228 kernel: Zone ranges:
Jan 17 00:26:46.108235 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 00:26:46.108247 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 00:26:46.108257 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 00:26:46.108266 kernel: Movable zone start for each node
Jan 17 00:26:46.108274 kernel: Early memory node ranges
Jan 17 00:26:46.108285 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 00:26:46.108293 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff]
Jan 17 00:26:46.108307 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff]
Jan 17 00:26:46.108314 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 17 00:26:46.108326 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 00:26:46.108333 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 17 00:26:46.108344 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 00:26:46.108353 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 00:26:46.108360 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Jan 17 00:26:46.108372 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges
Jan 17 00:26:46.108380 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 17 00:26:46.108391 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 17 00:26:46.108401 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 17 00:26:46.108408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 00:26:46.108420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 00:26:46.108428 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 17 00:26:46.108436 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 00:26:46.108447 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 17 00:26:46.108455 kernel: Booting paravirtualized kernel on Hyper-V
Jan 17 00:26:46.108467 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 00:26:46.108477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 00:26:46.108489 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Jan 17 00:26:46.108496 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Jan 17 00:26:46.108507 kernel: pcpu-alloc: [0] 0 1
Jan 17 00:26:46.108515 kernel: Hyper-V: PV spinlocks enabled
Jan 17 00:26:46.108523 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 00:26:46.108536 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:46.108544 kernel: random: crng init done
Jan 17 00:26:46.108557 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 00:26:46.108565 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:26:46.108573 kernel: Fallback order for Node 0: 0
Jan 17 00:26:46.108584 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321
Jan 17 00:26:46.108592 kernel: Policy zone: Normal
Jan 17 00:26:46.108604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:26:46.108612 kernel: software IO TLB: area num 2.
Jan 17 00:26:46.108623 kernel: Memory: 8056460K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 326508K reserved, 0K cma-reserved)
Jan 17 00:26:46.108632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:26:46.108652 kernel: ftrace: allocating 37989 entries in 149 pages
Jan 17 00:26:46.108665 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 00:26:46.108673 kernel: Dynamic Preempt: voluntary
Jan 17 00:26:46.108687 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:26:46.108699 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:26:46.108717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:26:46.108729 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:26:46.108738 kernel: Rude variant of Tasks RCU enabled.
Jan 17 00:26:46.108750 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:26:46.108761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:26:46.108774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:26:46.108785 kernel: Using NULL legacy PIC
Jan 17 00:26:46.108795 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 17 00:26:46.108806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:26:46.108815 kernel: Console: colour dummy device 80x25
Jan 17 00:26:46.108828 kernel: printk: console [tty1] enabled
Jan 17 00:26:46.108856 kernel: printk: console [ttyS0] enabled
Jan 17 00:26:46.108885 kernel: printk: bootconsole [earlyser0] disabled
Jan 17 00:26:46.108905 kernel: ACPI: Core revision 20230628
Jan 17 00:26:46.108924 kernel: Failed to register legacy timer interrupt
Jan 17 00:26:46.108942 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 00:26:46.108958 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 00:26:46.108977 kernel: Hyper-V: Using IPI hypercalls
Jan 17 00:26:46.108993 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 17 00:26:46.109009 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 17 00:26:46.109025 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 17 00:26:46.109047 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 17 00:26:46.109063 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 17 00:26:46.109081 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 17 00:26:46.109099 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 17 00:26:46.109118 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 00:26:46.109136 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
Jan 17 00:26:46.109153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 00:26:46.109168 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 00:26:46.109183 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 17 00:26:46.109215 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 00:26:46.109235 kernel: RETBleed: Vulnerable
Jan 17 00:26:46.109250 kernel: Speculative Store Bypass: Vulnerable
Jan 17 00:26:46.109265 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:26:46.109282 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 00:26:46.109299 kernel: active return thunk: its_return_thunk
Jan 17 00:26:46.109314 kernel: ITS: Mitigation: Aligned branch/return thunks
Jan 17 00:26:46.109331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 00:26:46.109350 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 00:26:46.109369 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 00:26:46.109384 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 00:26:46.109408 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 00:26:46.109423 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 00:26:46.109443 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 00:26:46.109462 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 17 00:26:46.109481 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 17 00:26:46.109499 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 00:26:46.109516 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 17 00:26:46.109533 kernel: Freeing SMP alternatives memory: 32K
Jan 17 00:26:46.109548 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:26:46.109563 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:26:46.109578 kernel: landlock: Up and running.
Jan 17 00:26:46.109591 kernel: SELinux: Initializing.
Jan 17 00:26:46.109613 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:26:46.109627 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 00:26:46.109641 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 00:26:46.109653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:46.109668 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:46.109681 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:26:46.109696 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 00:26:46.109726 kernel: signal: max sigframe size: 3632
Jan 17 00:26:46.109741 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:26:46.109759 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:26:46.109774 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 00:26:46.109789 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:26:46.109804 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 00:26:46.109819 kernel: .... node #0, CPUs: #1
Jan 17 00:26:46.109834 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 17 00:26:46.109851 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 00:26:46.109865 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:26:46.109880 kernel: smpboot: Max logical packages: 1
Jan 17 00:26:46.109899 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 17 00:26:46.109914 kernel: devtmpfs: initialized
Jan 17 00:26:46.109929 kernel: x86/mm: Memory block size: 128MB
Jan 17 00:26:46.109944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 17 00:26:46.109959 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:26:46.109974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:26:46.109989 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:26:46.110004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:26:46.110019 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:26:46.110037 kernel: audit: type=2000 audit(1768609605.030:1): state=initialized audit_enabled=0 res=1
Jan 17 00:26:46.110051 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:26:46.110066 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 00:26:46.110081 kernel: cpuidle: using governor menu
Jan 17 00:26:46.110096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:26:46.110111 kernel: dca service started, version 1.12.1
Jan 17 00:26:46.110126 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff]
Jan 17 00:26:46.110141 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff]
Jan 17 00:26:46.110156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 00:26:46.110174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:26:46.110189 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:26:46.110204 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:26:46.110219 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:26:46.110234 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:26:46.110250 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:26:46.110265 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:26:46.110280 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:26:46.110299 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 00:26:46.110313 kernel: ACPI: Interpreter enabled
Jan 17 00:26:46.110329 kernel: ACPI: PM: (supports S0 S5)
Jan 17 00:26:46.110344 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 00:26:46.110359 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 00:26:46.110373 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 00:26:46.110388 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 17 00:26:46.110403 kernel: iommu: Default domain type: Translated
Jan 17 00:26:46.110418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 00:26:46.110433 kernel: efivars: Registered efivars operations
Jan 17 00:26:46.110452 kernel: PCI: Using ACPI for IRQ routing
Jan 17 00:26:46.110467 kernel: PCI: System does not support PCI
Jan 17 00:26:46.110482 kernel: vgaarb: loaded
Jan 17 00:26:46.110497 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 17 00:26:46.110512 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:26:46.110527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:26:46.110542 kernel: pnp: PnP ACPI init
Jan 17 00:26:46.110558 kernel: pnp: PnP ACPI: found 3 devices
Jan 17 00:26:46.110573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 00:26:46.110590 kernel: NET: Registered PF_INET protocol family
Jan 17 00:26:46.110605 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 00:26:46.110621 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 00:26:46.110636 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:26:46.110651 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:26:46.110667 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 00:26:46.110682 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 00:26:46.110697 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:26:46.110727 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 00:26:46.110744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:26:46.110757 kernel: NET: Registered PF_XDP protocol family
Jan 17 00:26:46.110774 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:26:46.110787 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 00:26:46.110799 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB)
Jan 17 00:26:46.110812 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 00:26:46.110825 kernel: Initialise system trusted keyrings
Jan 17 00:26:46.110840 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 00:26:46.110857 kernel: Key type asymmetric registered
Jan 17 00:26:46.110870 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:26:46.110883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 00:26:46.110896 kernel: io scheduler mq-deadline registered
Jan 17 00:26:46.110911 kernel: io scheduler kyber registered
Jan 17 00:26:46.110924 kernel: io scheduler bfq registered
Jan 17 00:26:46.110938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 00:26:46.110952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:26:46.110965 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 00:26:46.110979 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 00:26:46.110995 kernel: i8042: PNP: No PS/2 controller found.
Jan 17 00:26:46.111184 kernel: rtc_cmos 00:02: registered as rtc0
Jan 17 00:26:46.111320 kernel: rtc_cmos 00:02: setting system clock to 2026-01-17T00:26:45 UTC (1768609605)
Jan 17 00:26:46.111440 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 17 00:26:46.111459 kernel: intel_pstate: CPU model not supported
Jan 17 00:26:46.111474 kernel: efifb: probing for efifb
Jan 17 00:26:46.111490 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 00:26:46.111509 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 00:26:46.111524 kernel: efifb: scrolling: redraw
Jan 17 00:26:46.111539 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 00:26:46.111555 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 00:26:46.111570 kernel: fb0: EFI VGA frame buffer device
Jan 17 00:26:46.111585 kernel: pstore: Using crash dump compression: deflate
Jan 17 00:26:46.111600 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 00:26:46.111615 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:26:46.111630 kernel: Segment Routing with IPv6
Jan 17 00:26:46.111648 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:26:46.111663 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:26:46.111679 kernel: Key type dns_resolver registered
Jan 17 00:26:46.111693 kernel: IPI shorthand broadcast: enabled
Jan 17 00:26:46.116730 kernel: sched_clock: Marking stable (870003000, 55142100)->(1162673200, -237528100)
Jan 17 00:26:46.116755 kernel: registered taskstats version 1
Jan 17 00:26:46.116771 kernel: Loading compiled-in X.509 certificates
Jan 17 00:26:46.116785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4'
Jan 17 00:26:46.116800 kernel: Key type .fscrypt registered
Jan 17 00:26:46.116819 kernel: Key type fscrypt-provisioning registered
Jan 17 00:26:46.116833 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:26:46.116848 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:26:46.116862 kernel: ima: No architecture policies found
Jan 17 00:26:46.116877 kernel: clk: Disabling unused clocks
Jan 17 00:26:46.116891 kernel: Freeing unused kernel image (initmem) memory: 42884K
Jan 17 00:26:46.116905 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 00:26:46.116920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Jan 17 00:26:46.116934 kernel: Run /init as init process
Jan 17 00:26:46.116951 kernel: with arguments:
Jan 17 00:26:46.116965 kernel: /init
Jan 17 00:26:46.116979 kernel: with environment:
Jan 17 00:26:46.116993 kernel: HOME=/
Jan 17 00:26:46.117007 kernel: TERM=linux
Jan 17 00:26:46.117024 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:26:46.117041 systemd[1]: Detected virtualization microsoft.
Jan 17 00:26:46.117056 systemd[1]: Detected architecture x86-64.
Jan 17 00:26:46.117074 systemd[1]: Running in initrd.
Jan 17 00:26:46.117089 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:26:46.117103 systemd[1]: Hostname set to .
Jan 17 00:26:46.117119 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:26:46.117133 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:26:46.117148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:26:46.117164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:26:46.117180 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:26:46.117198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:26:46.117213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:26:46.117228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:26:46.117245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:26:46.117261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:26:46.117276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:26:46.117291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:26:46.117309 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:26:46.117324 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:26:46.117339 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:26:46.117354 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:26:46.117369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:26:46.117384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:26:46.117400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:26:46.117415 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:26:46.117430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:26:46.117448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:26:46.117463 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:26:46.117478 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:26:46.117493 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:26:46.117508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:26:46.117523 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:26:46.117539 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:26:46.117553 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:26:46.117571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:26:46.117587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:46.117627 systemd-journald[177]: Collecting audit messages is disabled.
Jan 17 00:26:46.117660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:26:46.117678 systemd-journald[177]: Journal started
Jan 17 00:26:46.117718 systemd-journald[177]: Runtime Journal (/run/log/journal/c8dcb6734b334efd93ca3c6a2e1936a7) is 8.0M, max 158.7M, 150.7M free.
Jan 17 00:26:46.121898 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:26:46.126816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:26:46.133414 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:26:46.137295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:46.147646 systemd-modules-load[178]: Inserted module 'overlay'
Jan 17 00:26:46.148024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:46.165634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:26:46.172866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:26:46.191280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:46.198095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:26:46.216939 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:26:46.229058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:26:46.231100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:26:46.244001 dracut-cmdline[203]: dracut-dracut-053
Jan 17 00:26:46.250686 kernel: Bridge firewalling registered
Jan 17 00:26:46.244771 systemd-modules-load[178]: Inserted module 'br_netfilter'
Jan 17 00:26:46.259280 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd
Jan 17 00:26:46.247893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:26:46.253545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:26:46.278852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:26:46.294438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:26:46.300479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:26:46.314888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:26:46.362779 systemd-resolved[256]: Positive Trust Anchors:
Jan 17 00:26:46.362797 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:26:46.362850 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:26:46.401263 kernel: SCSI subsystem initialized
Jan 17 00:26:46.392788 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jan 17 00:26:46.394058 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:26:46.397677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:26:46.412326 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:26:46.424727 kernel: iscsi: registered transport (tcp)
Jan 17 00:26:46.447540 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:26:46.447633 kernel: QLogic iSCSI HBA Driver
Jan 17 00:26:46.485029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:26:46.495906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:26:46.525043 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:26:46.525142 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:26:46.529727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:26:46.569736 kernel: raid6: avx512x4 gen() 18077 MB/s
Jan 17 00:26:46.589725 kernel: raid6: avx512x2 gen() 18077 MB/s
Jan 17 00:26:46.608720 kernel: raid6: avx512x1 gen() 18095 MB/s
Jan 17 00:26:46.627718 kernel: raid6: avx2x4 gen() 18112 MB/s
Jan 17 00:26:46.647726 kernel: raid6: avx2x2 gen() 18041 MB/s
Jan 17 00:26:46.667823 kernel: raid6: avx2x1 gen() 13847 MB/s
Jan 17 00:26:46.667857 kernel: raid6: using algorithm avx2x4 gen() 18112 MB/s
Jan 17 00:26:46.690931 kernel: raid6: .... xor() 6202 MB/s, rmw enabled
Jan 17 00:26:46.690960 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 00:26:46.714735 kernel: xor: automatically using best checksumming function avx
Jan 17 00:26:46.863755 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:26:46.873754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:26:46.890891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:26:46.905673 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Jan 17 00:26:46.910419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:26:46.934893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:26:46.949563 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 17 00:26:46.980162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:26:46.989062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:26:47.034131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:26:47.048991 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:26:47.080852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:26:47.088483 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:26:47.095580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:47.102408 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:26:47.113969 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:26:47.140478 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:26:47.141954 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:26:47.156185 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:26:47.156405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:47.160615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:47.163920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:47.193268 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:26:47.193298 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:26:47.164171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:47.200020 kernel: hv_vmbus: Vmbus version:5.2
Jan 17 00:26:47.167299 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.190190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.211220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:47.214304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:47.226887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.236731 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:26:47.250703 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 00:26:47.253598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:48.412181 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 00:26:48.412220 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 00:26:48.412241 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 17 00:26:48.412271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 17 00:26:48.412288 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 00:26:48.415638 kernel: PTP clock support registered
Jan 17 00:26:48.415663 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 00:26:48.415683 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 00:26:48.415702 kernel: hv_vmbus: registering driver hv_utils
Jan 17 00:26:48.415726 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 00:26:48.415745 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 00:26:48.415762 kernel: scsi host1: storvsc_host_t
Jan 17 00:26:48.415997 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 00:26:48.416010 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 17 00:26:48.416021 kernel: scsi host0: storvsc_host_t
Jan 17 00:26:48.416145 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 00:26:48.416294 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 17 00:26:48.416433 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 00:26:48.416449 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 00:26:48.357146 systemd-resolved[256]: Clock change detected. Flushing caches.
Jan 17 00:26:48.430026 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:26:48.430063 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 00:26:48.423012 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:48.455427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:48.466067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#224 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:26:48.472222 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 00:26:48.472574 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 00:26:48.473872 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:26:48.476860 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 00:26:48.477054 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 00:26:48.487859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.492094 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:26:48.505868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#248 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:26:48.577875 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: VF slot 1 added
Jan 17 00:26:48.599869 kernel: hv_vmbus: registering driver hv_pci
Jan 17 00:26:48.605139 kernel: hv_pci d400ee3c-2159-4fb3-8452-05b5b6d68f95: PCI VMBus probing: Using version 0x10004
Jan 17 00:26:48.605387 kernel: hv_pci d400ee3c-2159-4fb3-8452-05b5b6d68f95: PCI host bridge to bus 2159:00
Jan 17 00:26:48.611201 kernel: pci_bus 2159:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 17 00:26:48.614682 kernel: pci_bus 2159:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 00:26:48.619978 kernel: pci 2159:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 17 00:26:48.624993 kernel: pci 2159:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:26:48.629937 kernel: pci 2159:00:02.0: enabling Extended Tags
Jan 17 00:26:48.644868 kernel: pci 2159:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2159:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 17 00:26:48.644940 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442)
Jan 17 00:26:48.653771 kernel: pci_bus 2159:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 00:26:48.659820 kernel: pci 2159:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:26:48.681875 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (443)
Jan 17 00:26:48.711536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 00:26:48.730548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 00:26:48.748251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:26:48.763363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 00:26:48.766858 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 00:26:48.789098 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:26:48.813860 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.824899 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.834859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.973865 kernel: mlx5_core 2159:00:02.0: enabling device (0000 -> 0002)
Jan 17 00:26:48.983881 kernel: mlx5_core 2159:00:02.0: firmware version: 14.30.5026
Jan 17 00:26:49.222936 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: VF registering: eth1
Jan 17 00:26:49.227899 kernel: mlx5_core 2159:00:02.0 eth1: joined to eth0
Jan 17 00:26:49.235875 kernel: mlx5_core 2159:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 17 00:26:49.250884 kernel: mlx5_core 2159:00:02.0 enP8537s1: renamed from eth1
Jan 17 00:26:49.844897 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:49.847325 disk-uuid[596]: The operation has completed successfully.
Jan 17 00:26:49.936219 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:26:49.936344 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:26:49.964018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:26:49.970511 sh[719]: Success
Jan 17 00:26:49.990933 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:26:50.083502 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:26:50.099018 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:26:50.106264 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:26:50.138865 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:26:50.138921 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:50.144365 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:26:50.147479 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:26:50.150066 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:26:50.219997 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:26:50.225489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:26:50.235028 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:26:50.241740 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:26:50.264910 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:50.264968 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:50.264989 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:50.279865 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:50.290482 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:26:50.297858 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:50.306567 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:26:50.325123 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:26:50.339204 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:26:50.352003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:26:50.376600 systemd-networkd[903]: lo: Link UP
Jan 17 00:26:50.376612 systemd-networkd[903]: lo: Gained carrier
Jan 17 00:26:50.380678 systemd-networkd[903]: Enumeration completed
Jan 17 00:26:50.380959 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:26:50.384624 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:50.384630 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:26:50.390080 systemd[1]: Reached target network.target - Network.
Jan 17 00:26:50.453866 kernel: mlx5_core 2159:00:02.0 enP8537s1: Link up
Jan 17 00:26:50.492603 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: Data path switched to VF: enP8537s1
Jan 17 00:26:50.487929 systemd-networkd[903]: enP8537s1: Link UP
Jan 17 00:26:50.488046 systemd-networkd[903]: eth0: Link UP
Jan 17 00:26:50.496966 systemd-networkd[903]: eth0: Gained carrier
Jan 17 00:26:50.496983 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:50.506787 systemd-networkd[903]: enP8537s1: Gained carrier
Jan 17 00:26:50.543782 systemd-networkd[903]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 17 00:26:50.579242 ignition[884]: Ignition 2.19.0
Jan 17 00:26:50.579258 ignition[884]: Stage: fetch-offline
Jan 17 00:26:50.579311 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.579322 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.579457 ignition[884]: parsed url from cmdline: ""
Jan 17 00:26:50.586913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:26:50.579462 ignition[884]: no config URL provided
Jan 17 00:26:50.579469 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.579480 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.579487 ignition[884]: failed to fetch config: resource requires networking
Jan 17 00:26:50.583034 ignition[884]: Ignition finished successfully
Jan 17 00:26:50.605931 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:26:50.622629 ignition[912]: Ignition 2.19.0
Jan 17 00:26:50.622643 ignition[912]: Stage: fetch
Jan 17 00:26:50.622870 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.622884 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.622984 ignition[912]: parsed url from cmdline: ""
Jan 17 00:26:50.622987 ignition[912]: no config URL provided
Jan 17 00:26:50.622992 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.622998 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.623017 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 00:26:50.691106 ignition[912]: GET result: OK
Jan 17 00:26:50.691224 ignition[912]: config has been read from IMDS userdata
Jan 17 00:26:50.691260 ignition[912]: parsing config with SHA512: e8c7d6511235e2a83bda5f47b8ec3596df7e24022bde52ba0f491f10e872494112f447190c28d21a53170a373c240732491504c5ba2766b363a78e280ffcfa35
Jan 17 00:26:50.697233 unknown[912]: fetched base config from "system"
Jan 17 00:26:50.697978 ignition[912]: fetch: fetch complete
Jan 17 00:26:50.697245 unknown[912]: fetched base config from "system"
Jan 17 00:26:50.697986 ignition[912]: fetch: fetch passed
Jan 17 00:26:50.697255 unknown[912]: fetched user config from "azure"
Jan 17 00:26:50.698050 ignition[912]: Ignition finished successfully
Jan 17 00:26:50.712082 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:26:50.725994 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:26:50.745804 ignition[918]: Ignition 2.19.0
Jan 17 00:26:50.745817 ignition[918]: Stage: kargs
Jan 17 00:26:50.748556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:26:50.746053 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.746067 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.746987 ignition[918]: kargs: kargs passed
Jan 17 00:26:50.747040 ignition[918]: Ignition finished successfully
Jan 17 00:26:50.769035 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:26:50.788379 ignition[924]: Ignition 2.19.0
Jan 17 00:26:50.788392 ignition[924]: Stage: disks
Jan 17 00:26:50.788635 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.788649 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.794412 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:26:50.789583 ignition[924]: disks: disks passed
Jan 17 00:26:50.797722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:26:50.789644 ignition[924]: Ignition finished successfully
Jan 17 00:26:50.803238 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:26:50.806686 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:26:50.811909 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:26:50.816737 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:26:50.846021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:26:50.879026 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:26:50.884055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:26:50.898715 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:26:50.998870 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:26:50.999415 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:26:51.004626 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:26:51.021992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:26:51.031245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:26:51.038047 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:26:51.052514 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Jan 17 00:26:51.052559 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.045934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:26:51.063915 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:51.063957 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:51.045973 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:26:51.072837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:26:51.079493 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:51.082047 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:26:51.092244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:26:51.223388 coreos-metadata[945]: Jan 17 00:26:51.223 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:26:51.229686 coreos-metadata[945]: Jan 17 00:26:51.229 INFO Fetch successful
Jan 17 00:26:51.229686 coreos-metadata[945]: Jan 17 00:26:51.229 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:26:51.240687 coreos-metadata[945]: Jan 17 00:26:51.240 INFO Fetch successful
Jan 17 00:26:51.246912 coreos-metadata[945]: Jan 17 00:26:51.246 INFO wrote hostname ci-4081.3.6-n-c809bb5d02 to /sysroot/etc/hostname
Jan 17 00:26:51.253242 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:26:51.270821 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:26:51.286453 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:26:51.292205 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:26:51.301927 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:26:51.573409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:26:51.580088 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:26:51.588033 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:26:51.600159 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:26:51.606655 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.637562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:26:51.643344 ignition[1063]: INFO : Ignition 2.19.0
Jan 17 00:26:51.643344 ignition[1063]: INFO : Stage: mount
Jan 17 00:26:51.643344 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:51.643344 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:51.643344 ignition[1063]: INFO : mount: mount passed
Jan 17 00:26:51.643344 ignition[1063]: INFO : Ignition finished successfully
Jan 17 00:26:51.645085 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:26:51.661916 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:26:51.680039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:26:51.698870 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073)
Jan 17 00:26:51.706929 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.706992 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:51.709714 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:51.718866 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:51.719299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:26:51.747516 ignition[1090]: INFO : Ignition 2.19.0
Jan 17 00:26:51.747516 ignition[1090]: INFO : Stage: files
Jan 17 00:26:51.751917 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:51.751917 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:51.751917 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:26:51.761301 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:26:51.761301 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:26:51.789439 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:26:51.793556 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:26:51.793882 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 17 00:26:51.797707 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:26:51.804120 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:26:51.804120 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:26:51.843805 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:26:51.885942 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:26:52.088047 systemd-networkd[903]: eth0: Gained IPv6LL
Jan 17 00:26:52.323805 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:26:52.649222 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:52.649222 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:26:52.659481 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:26:52.665079 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:26:52.665079 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:26:52.673638 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:26:52.673638 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:26:52.681355 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:26:52.686089 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:26:52.690803 ignition[1090]: INFO : files: files passed
Jan 17 00:26:52.692849 ignition[1090]: INFO : Ignition finished successfully
Jan 17 00:26:52.696772 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:26:52.706386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:26:52.713467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:26:52.720910 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:26:52.722255 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:26:52.746422 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.746422 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.750595 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:26:52.758955 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.767392 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:26:52.778031 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:26:52.803459 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:26:52.803608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:26:52.813670 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:26:52.816514 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:26:52.822122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:26:52.830091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:26:52.846324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:26:52.857093 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:26:52.869394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:26:52.875857 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:52.879555 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:26:52.886945 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:26:52.887118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:26:52.892978 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:26:52.898230 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:26:52.905028 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:26:52.910360 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:26:52.913827 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:26:52.922274 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:26:52.927573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:26:52.934073 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:26:52.939323 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:26:52.940331 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:26:52.940737 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:26:52.940983 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:26:52.941688 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:26:52.942259 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:26:52.942658 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:26:52.952106 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:26:52.957523 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:26:52.964576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:26:52.970727 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:26:52.970866 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:26:52.979173 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:26:52.979334 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:26:52.983984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:26:52.984133 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:26:53.003880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:26:53.012104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:26:53.015913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:26:53.016069 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:26:53.026133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:26:53.026280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:26:53.038147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:26:53.038258 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:26:53.049356 ignition[1143]: INFO : Ignition 2.19.0
Jan 17 00:26:53.049356 ignition[1143]: INFO : Stage: umount
Jan 17 00:26:53.049356 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:53.049356 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:53.055241 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:26:53.056906 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:26:53.067196 ignition[1143]: INFO : umount: umount passed
Jan 17 00:26:53.067196 ignition[1143]: INFO : Ignition finished successfully
Jan 17 00:26:53.068498 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:26:53.069411 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:26:53.069520 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:26:53.076829 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:26:53.076908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:26:53.083468 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:26:53.083523 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:26:53.083834 systemd[1]: Stopped target network.target - Network.
Jan 17 00:26:53.084235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:26:53.084279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:26:53.084757 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:26:53.087980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:26:53.105180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:26:53.109590 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:26:53.110590 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:26:53.111500 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:26:53.111554 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:26:53.111933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:26:53.111969 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:26:53.112356 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:26:53.112402 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:26:53.112888 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:26:53.112937 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:26:53.113594 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:26:53.113883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:26:53.144910 systemd-networkd[903]: eth0: DHCPv6 lease lost
Jan 17 00:26:53.148413 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:26:53.148564 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:26:53.154022 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:26:53.154069 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:26:53.180079 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:26:53.183754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:26:53.183833 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:26:53.192598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:26:53.197532 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:26:53.197665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:26:53.228541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:26:53.230951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:26:53.244393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:26:53.244493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:26:53.247267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:26:53.247308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:26:53.252507 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:26:53.252566 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:26:53.258737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:26:53.258789 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:26:53.264032 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:26:53.264082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:53.281094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:26:53.291047 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:26:53.291122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:26:53.297157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:26:53.297213 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:26:53.306772 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: Data path switched from VF: enP8537s1
Jan 17 00:26:53.306821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:26:53.306904 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:26:53.310062 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:26:53.310118 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:26:53.313307 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:26:53.313357 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:26:53.322033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:26:53.324564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:26:53.350431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:53.350511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:53.356588 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:26:53.356721 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:26:53.361293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:26:53.361383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:26:53.570888 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:26:53.571035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:26:53.576987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:26:53.582049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:26:53.582131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:26:53.599080 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:26:53.609488 systemd[1]: Switching root.
Jan 17 00:26:53.645870 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
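Entries in this journal are written by several processes (systemd, the kernel, ignition, journald itself) and can land slightly out of timestamp order. A small Python helper for re-sorting lines in the format used throughout this log; the year is not part of the format, so a fixed one is assumed here.

    # Sketch: parse "Jan 17 00:26:53.645870 unit[pid]: message" lines and
    # re-sort them chronologically.
    from datetime import datetime

    def parse_ts(line: str) -> datetime:
        # First three whitespace-separated fields: month, day, time (with
        # microseconds). The year is absent, so 2026 is assumed.
        month, day, clock = line.split()[:3]
        return datetime.strptime(f"2026 {month} {day} {clock}",
                                 "%Y %b %d %H:%M:%S.%f")

    def sort_journal(lines: list[str]) -> list[str]:
        return sorted(lines, key=parse_ts)

    sample = [
        "Jan 17 00:26:53.645947 systemd-journald[177]: Journal stopped",
        "Jan 17 00:26:53.645870 systemd-journald[177]: Received SIGTERM from PID 1 (systemd)",
    ]
    print("\n".join(sort_journal(sample)))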
Jan 17 00:26:53.645947 systemd-journald[177]: Journal stopped Jan 17 00:26:46.107482 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 16 22:25:55 -00 2026 Jan 17 00:26:46.107511 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:26:46.107524 kernel: BIOS-provided physical RAM map: Jan 17 00:26:46.107533 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 17 00:26:46.107540 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 17 00:26:46.107546 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000000437dfff] usable Jan 17 00:26:46.107558 kernel: BIOS-e820: [mem 0x000000000437e000-0x000000000477dfff] reserved Jan 17 00:26:46.107564 kernel: BIOS-e820: [mem 0x000000000477e000-0x000000003ff1efff] usable Jan 17 00:26:46.107575 kernel: BIOS-e820: [mem 0x000000003ff1f000-0x000000003ff73fff] type 20 Jan 17 00:26:46.107583 kernel: BIOS-e820: [mem 0x000000003ff74000-0x000000003ffc8fff] reserved Jan 17 00:26:46.107590 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 17 00:26:46.107600 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 17 00:26:46.107606 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 17 00:26:46.107613 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 17 00:26:46.107627 kernel: printk: bootconsole [earlyser0] enabled Jan 17 00:26:46.107634 kernel: NX (Execute Disable) protection: active Jan 17 00:26:46.107646 kernel: APIC: Static calls initialized Jan 17 00:26:46.107653 kernel: efi: EFI v2.7 by Microsoft Jan 17 00:26:46.107662 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff88000 SMBIOS 3.0=0x3ff86000 MEMATTR=0x3ee82698 Jan 17 00:26:46.107671 kernel: SMBIOS 3.1.0 present. 
Jan 17 00:26:46.107679 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 06/10/2025 Jan 17 00:26:46.107690 kernel: Hypervisor detected: Microsoft Hyper-V Jan 17 00:26:46.107697 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 17 00:26:46.107721 kernel: Hyper-V: Host Build 10.0.26102.1145-1-0 Jan 17 00:26:46.107733 kernel: Hyper-V: Nested features: 0x1e0101 Jan 17 00:26:46.107742 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 17 00:26:46.107753 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 17 00:26:46.107761 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 17 00:26:46.107768 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 17 00:26:46.107780 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 17 00:26:46.107787 kernel: tsc: Detected 2593.907 MHz processor Jan 17 00:26:46.107798 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 17 00:26:46.107806 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 17 00:26:46.107814 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 17 00:26:46.107826 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 17 00:26:46.107834 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 17 00:26:46.107845 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 17 00:26:46.107852 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 17 00:26:46.107862 kernel: Using GB pages for direct mapping Jan 17 00:26:46.107871 kernel: Secure boot disabled Jan 17 00:26:46.107885 kernel: ACPI: Early table checksum verification disabled Jan 17 00:26:46.107896 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 17 00:26:46.107907 kernel: ACPI: XSDT 0x000000003FFF90E8 00005C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107915 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107927 kernel: ACPI: DSDT 0x000000003FFD6000 01E22B (v02 MSFTVM DSDT01 00000001 INTL 20230628) Jan 17 00:26:46.107934 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 17 00:26:46.107946 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107954 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107967 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107975 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107986 kernel: ACPI: SRAT 0x000000003FFD4000 0001E0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.107995 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 17 00:26:46.108002 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 17 00:26:46.108014 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff422a] Jan 17 00:26:46.108022 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 17 00:26:46.108030 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 17 00:26:46.108042 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 17 00:26:46.108051 kernel: ACPI: Reserving 
WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 17 00:26:46.108063 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 17 00:26:46.108071 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd41df] Jan 17 00:26:46.108082 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 17 00:26:46.108091 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 00:26:46.108099 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 00:26:46.108110 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 17 00:26:46.108118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 17 00:26:46.108130 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 17 00:26:46.108139 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 17 00:26:46.108148 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 17 00:26:46.108159 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 17 00:26:46.108167 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 17 00:26:46.108178 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 17 00:26:46.108186 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 17 00:26:46.108196 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 17 00:26:46.108206 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 17 00:26:46.108218 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 17 00:26:46.108228 kernel: Zone ranges: Jan 17 00:26:46.108235 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 00:26:46.108247 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 00:26:46.108257 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 00:26:46.108266 kernel: Movable zone start for each node Jan 17 00:26:46.108274 kernel: Early memory node ranges Jan 17 00:26:46.108285 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 00:26:46.108293 kernel: node 0: [mem 0x0000000000100000-0x000000000437dfff] Jan 17 00:26:46.108307 kernel: node 0: [mem 0x000000000477e000-0x000000003ff1efff] Jan 17 00:26:46.108314 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 17 00:26:46.108326 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 00:26:46.108333 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 17 00:26:46.108344 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 00:26:46.108353 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 00:26:46.108360 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges Jan 17 00:26:46.108372 kernel: On node 0, zone DMA32: 224 pages in unavailable ranges Jan 17 00:26:46.108380 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 17 00:26:46.108391 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 17 00:26:46.108401 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 17 00:26:46.108408 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 00:26:46.108420 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 00:26:46.108428 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 17 00:26:46.108436 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 00:26:46.108447 kernel: [mem 0x40000000-0xffffffff] available for PCI 
devices Jan 17 00:26:46.108455 kernel: Booting paravirtualized kernel on Hyper-V Jan 17 00:26:46.108467 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 00:26:46.108477 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 00:26:46.108489 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Jan 17 00:26:46.108496 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Jan 17 00:26:46.108507 kernel: pcpu-alloc: [0] 0 1 Jan 17 00:26:46.108515 kernel: Hyper-V: PV spinlocks enabled Jan 17 00:26:46.108523 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 00:26:46.108536 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:26:46.108544 kernel: random: crng init done Jan 17 00:26:46.108557 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 00:26:46.108565 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 00:26:46.108573 kernel: Fallback order for Node 0: 0 Jan 17 00:26:46.108584 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2061321 Jan 17 00:26:46.108592 kernel: Policy zone: Normal Jan 17 00:26:46.108604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 00:26:46.108612 kernel: software IO TLB: area num 2. Jan 17 00:26:46.108623 kernel: Memory: 8056460K/8383228K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42884K init, 2312K bss, 326508K reserved, 0K cma-reserved) Jan 17 00:26:46.108632 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 00:26:46.108652 kernel: ftrace: allocating 37989 entries in 149 pages Jan 17 00:26:46.108665 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 00:26:46.108673 kernel: Dynamic Preempt: voluntary Jan 17 00:26:46.108687 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 00:26:46.108699 kernel: rcu: RCU event tracing is enabled. Jan 17 00:26:46.108717 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 00:26:46.108729 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 00:26:46.108738 kernel: Rude variant of Tasks RCU enabled. Jan 17 00:26:46.108750 kernel: Tracing variant of Tasks RCU enabled. Jan 17 00:26:46.108761 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 00:26:46.108774 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 00:26:46.108785 kernel: Using NULL legacy PIC Jan 17 00:26:46.108795 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 17 00:26:46.108806 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 17 00:26:46.108815 kernel: Console: colour dummy device 80x25 Jan 17 00:26:46.108828 kernel: printk: console [tty1] enabled Jan 17 00:26:46.108856 kernel: printk: console [ttyS0] enabled Jan 17 00:26:46.108885 kernel: printk: bootconsole [earlyser0] disabled Jan 17 00:26:46.108905 kernel: ACPI: Core revision 20230628 Jan 17 00:26:46.108924 kernel: Failed to register legacy timer interrupt Jan 17 00:26:46.108942 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 00:26:46.108958 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 00:26:46.108977 kernel: Hyper-V: Using IPI hypercalls Jan 17 00:26:46.108993 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 17 00:26:46.109009 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 17 00:26:46.109025 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 17 00:26:46.109047 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 17 00:26:46.109063 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 17 00:26:46.109081 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 17 00:26:46.109099 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 17 00:26:46.109118 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 00:26:46.109136 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 Jan 17 00:26:46.109153 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 00:26:46.109168 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 00:26:46.109183 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jan 17 00:26:46.109215 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jan 17 00:26:46.109235 kernel: RETBleed: Vulnerable Jan 17 00:26:46.109250 kernel: Speculative Store Bypass: Vulnerable Jan 17 00:26:46.109265 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:26:46.109282 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 00:26:46.109299 kernel: active return thunk: its_return_thunk Jan 17 00:26:46.109314 kernel: ITS: Mitigation: Aligned branch/return thunks Jan 17 00:26:46.109331 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 00:26:46.109350 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 00:26:46.109369 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 00:26:46.109384 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 00:26:46.109408 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 00:26:46.109423 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 00:26:46.109443 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 00:26:46.109462 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 17 00:26:46.109481 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 17 00:26:46.109499 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 17 00:26:46.109516 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. 
Jan 17 00:26:46.109533 kernel: Freeing SMP alternatives memory: 32K Jan 17 00:26:46.109548 kernel: pid_max: default: 32768 minimum: 301 Jan 17 00:26:46.109563 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 00:26:46.109578 kernel: landlock: Up and running. Jan 17 00:26:46.109591 kernel: SELinux: Initializing. Jan 17 00:26:46.109613 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:26:46.109627 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 00:26:46.109641 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 00:26:46.109653 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:26:46.109668 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:26:46.109681 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 00:26:46.109696 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 00:26:46.109726 kernel: signal: max sigframe size: 3632 Jan 17 00:26:46.109741 kernel: rcu: Hierarchical SRCU implementation. Jan 17 00:26:46.109759 kernel: rcu: Max phase no-delay instances is 400. Jan 17 00:26:46.109774 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 00:26:46.109789 kernel: smp: Bringing up secondary CPUs ... Jan 17 00:26:46.109804 kernel: smpboot: x86: Booting SMP configuration: Jan 17 00:26:46.109819 kernel: .... node #0, CPUs: #1 Jan 17 00:26:46.109834 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 17 00:26:46.109851 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 17 00:26:46.109865 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 00:26:46.109880 kernel: smpboot: Max logical packages: 1 Jan 17 00:26:46.109899 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 17 00:26:46.109914 kernel: devtmpfs: initialized Jan 17 00:26:46.109929 kernel: x86/mm: Memory block size: 128MB Jan 17 00:26:46.109944 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 17 00:26:46.109959 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 00:26:46.109974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 00:26:46.109989 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 00:26:46.110004 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 00:26:46.110019 kernel: audit: initializing netlink subsys (disabled) Jan 17 00:26:46.110037 kernel: audit: type=2000 audit(1768609605.030:1): state=initialized audit_enabled=0 res=1 Jan 17 00:26:46.110051 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 00:26:46.110066 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 00:26:46.110081 kernel: cpuidle: using governor menu Jan 17 00:26:46.110096 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 00:26:46.110111 kernel: dca service started, version 1.12.1 Jan 17 00:26:46.110126 kernel: e820: reserve RAM buffer [mem 0x0437e000-0x07ffffff] Jan 17 00:26:46.110141 kernel: e820: reserve RAM buffer [mem 0x3ff1f000-0x3fffffff] Jan 17 00:26:46.110156 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 00:26:46.110174 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 00:26:46.110189 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 00:26:46.110204 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 00:26:46.110219 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 00:26:46.110234 kernel: ACPI: Added _OSI(Module Device) Jan 17 00:26:46.110250 kernel: ACPI: Added _OSI(Processor Device) Jan 17 00:26:46.110265 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 00:26:46.110280 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 00:26:46.110299 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 00:26:46.110313 kernel: ACPI: Interpreter enabled Jan 17 00:26:46.110329 kernel: ACPI: PM: (supports S0 S5) Jan 17 00:26:46.110344 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 00:26:46.110359 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 00:26:46.110373 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 00:26:46.110388 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 17 00:26:46.110403 kernel: iommu: Default domain type: Translated Jan 17 00:26:46.110418 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 00:26:46.110433 kernel: efivars: Registered efivars operations Jan 17 00:26:46.110452 kernel: PCI: Using ACPI for IRQ routing Jan 17 00:26:46.110467 kernel: PCI: System does not support PCI Jan 17 00:26:46.110482 kernel: vgaarb: loaded Jan 17 00:26:46.110497 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 17 00:26:46.110512 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 00:26:46.110527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 
00:26:46.110542 kernel: pnp: PnP ACPI init Jan 17 00:26:46.110558 kernel: pnp: PnP ACPI: found 3 devices Jan 17 00:26:46.110573 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 00:26:46.110590 kernel: NET: Registered PF_INET protocol family Jan 17 00:26:46.110605 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 00:26:46.110621 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 00:26:46.110636 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 00:26:46.110651 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 00:26:46.110667 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 00:26:46.110682 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 00:26:46.110697 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:26:46.110727 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 00:26:46.110744 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 00:26:46.110757 kernel: NET: Registered PF_XDP protocol family Jan 17 00:26:46.110774 kernel: PCI: CLS 0 bytes, default 64 Jan 17 00:26:46.110787 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 00:26:46.110799 kernel: software IO TLB: mapped [mem 0x000000003a878000-0x000000003e878000] (64MB) Jan 17 00:26:46.110812 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 00:26:46.110825 kernel: Initialise system trusted keyrings Jan 17 00:26:46.110840 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 00:26:46.110857 kernel: Key type asymmetric registered Jan 17 00:26:46.110870 kernel: Asymmetric key parser 'x509' registered Jan 17 00:26:46.110883 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 00:26:46.110896 kernel: io scheduler mq-deadline registered Jan 17 00:26:46.110911 kernel: io scheduler kyber registered Jan 17 00:26:46.110924 kernel: io scheduler bfq registered Jan 17 00:26:46.110938 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 00:26:46.110952 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 00:26:46.110965 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 00:26:46.110979 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 00:26:46.110995 kernel: i8042: PNP: No PS/2 controller found. 
Jan 17 00:26:46.111184 kernel: rtc_cmos 00:02: registered as rtc0 Jan 17 00:26:46.111320 kernel: rtc_cmos 00:02: setting system clock to 2026-01-17T00:26:45 UTC (1768609605) Jan 17 00:26:46.111440 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 17 00:26:46.111459 kernel: intel_pstate: CPU model not supported Jan 17 00:26:46.111474 kernel: efifb: probing for efifb Jan 17 00:26:46.111490 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 00:26:46.111509 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 00:26:46.111524 kernel: efifb: scrolling: redraw Jan 17 00:26:46.111539 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 00:26:46.111555 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:26:46.111570 kernel: fb0: EFI VGA frame buffer device Jan 17 00:26:46.111585 kernel: pstore: Using crash dump compression: deflate Jan 17 00:26:46.111600 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 00:26:46.111615 kernel: NET: Registered PF_INET6 protocol family Jan 17 00:26:46.111630 kernel: Segment Routing with IPv6 Jan 17 00:26:46.111648 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 00:26:46.111663 kernel: NET: Registered PF_PACKET protocol family Jan 17 00:26:46.111679 kernel: Key type dns_resolver registered Jan 17 00:26:46.111693 kernel: IPI shorthand broadcast: enabled Jan 17 00:26:46.116730 kernel: sched_clock: Marking stable (870003000, 55142100)->(1162673200, -237528100) Jan 17 00:26:46.116755 kernel: registered taskstats version 1 Jan 17 00:26:46.116771 kernel: Loading compiled-in X.509 certificates Jan 17 00:26:46.116785 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: b6a847a3a522371f15b0d5425f12279a240740e4' Jan 17 00:26:46.116800 kernel: Key type .fscrypt registered Jan 17 00:26:46.116819 kernel: Key type fscrypt-provisioning registered Jan 17 00:26:46.116833 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 00:26:46.116848 kernel: ima: Allocated hash algorithm: sha1 Jan 17 00:26:46.116862 kernel: ima: No architecture policies found Jan 17 00:26:46.116877 kernel: clk: Disabling unused clocks Jan 17 00:26:46.116891 kernel: Freeing unused kernel image (initmem) memory: 42884K Jan 17 00:26:46.116905 kernel: Write protecting the kernel read-only data: 36864k Jan 17 00:26:46.116920 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Jan 17 00:26:46.116934 kernel: Run /init as init process Jan 17 00:26:46.116951 kernel: with arguments: Jan 17 00:26:46.116965 kernel: /init Jan 17 00:26:46.116979 kernel: with environment: Jan 17 00:26:46.116993 kernel: HOME=/ Jan 17 00:26:46.117007 kernel: TERM=linux Jan 17 00:26:46.117024 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 00:26:46.117041 systemd[1]: Detected virtualization microsoft. Jan 17 00:26:46.117056 systemd[1]: Detected architecture x86-64. Jan 17 00:26:46.117074 systemd[1]: Running in initrd. Jan 17 00:26:46.117089 systemd[1]: No hostname configured, using default hostname. Jan 17 00:26:46.117103 systemd[1]: Hostname set to . Jan 17 00:26:46.117119 systemd[1]: Initializing machine ID from random generator. 
Jan 17 00:26:46.117133 systemd[1]: Queued start job for default target initrd.target. Jan 17 00:26:46.117148 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 00:26:46.117164 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 00:26:46.117180 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 00:26:46.117198 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 00:26:46.117213 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 00:26:46.117228 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 00:26:46.117245 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 00:26:46.117261 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 00:26:46.117276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 00:26:46.117291 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:26:46.117309 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:26:46.117324 systemd[1]: Reached target slices.target - Slice Units. Jan 17 00:26:46.117339 systemd[1]: Reached target swap.target - Swaps. Jan 17 00:26:46.117354 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:26:46.117369 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 00:26:46.117384 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 00:26:46.117400 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 00:26:46.117415 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 00:26:46.117430 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 00:26:46.117448 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 00:26:46.117463 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 00:26:46.117478 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:26:46.117493 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 00:26:46.117508 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 00:26:46.117523 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 00:26:46.117539 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 00:26:46.117553 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 00:26:46.117571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 00:26:46.117587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:26:46.117627 systemd-journald[177]: Collecting audit messages is disabled. Jan 17 00:26:46.117660 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 00:26:46.117678 systemd-journald[177]: Journal started Jan 17 00:26:46.117718 systemd-journald[177]: Runtime Journal (/run/log/journal/c8dcb6734b334efd93ca3c6a2e1936a7) is 8.0M, max 158.7M, 150.7M free. Jan 17 00:26:46.121898 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 17 00:26:46.126816 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 00:26:46.133414 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 00:26:46.137295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:26:46.147646 systemd-modules-load[178]: Inserted module 'overlay' Jan 17 00:26:46.148024 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 00:26:46.165634 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 00:26:46.172866 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 00:26:46.191280 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 00:26:46.198095 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:26:46.216939 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 00:26:46.229058 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 00:26:46.231100 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 00:26:46.244001 dracut-cmdline[203]: dracut-dracut-053 Jan 17 00:26:46.250686 kernel: Bridge firewalling registered Jan 17 00:26:46.244771 systemd-modules-load[178]: Inserted module 'br_netfilter' Jan 17 00:26:46.259280 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5950c0a3c50d11b7bc07a3e3bf06049ed0b5a605b5e0b52a981b78f1c63eeedd Jan 17 00:26:46.247893 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 00:26:46.253545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 00:26:46.278852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 00:26:46.294438 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 00:26:46.300479 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 00:26:46.314888 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:26:46.362779 systemd-resolved[256]: Positive Trust Anchors: Jan 17 00:26:46.362797 systemd-resolved[256]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:26:46.362850 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:26:46.401263 kernel: SCSI subsystem initialized Jan 17 00:26:46.392788 systemd-resolved[256]: Defaulting to hostname 'linux'. Jan 17 00:26:46.394058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:26:46.397677 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:26:46.412326 kernel: Loading iSCSI transport class v2.0-870. Jan 17 00:26:46.424727 kernel: iscsi: registered transport (tcp) Jan 17 00:26:46.447540 kernel: iscsi: registered transport (qla4xxx) Jan 17 00:26:46.447633 kernel: QLogic iSCSI HBA Driver Jan 17 00:26:46.485029 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 00:26:46.495906 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 00:26:46.525043 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 00:26:46.525142 kernel: device-mapper: uevent: version 1.0.3 Jan 17 00:26:46.529727 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 00:26:46.569736 kernel: raid6: avx512x4 gen() 18077 MB/s Jan 17 00:26:46.589725 kernel: raid6: avx512x2 gen() 18077 MB/s Jan 17 00:26:46.608720 kernel: raid6: avx512x1 gen() 18095 MB/s Jan 17 00:26:46.627718 kernel: raid6: avx2x4 gen() 18112 MB/s Jan 17 00:26:46.647726 kernel: raid6: avx2x2 gen() 18041 MB/s Jan 17 00:26:46.667823 kernel: raid6: avx2x1 gen() 13847 MB/s Jan 17 00:26:46.667857 kernel: raid6: using algorithm avx2x4 gen() 18112 MB/s Jan 17 00:26:46.690931 kernel: raid6: .... xor() 6202 MB/s, rmw enabled Jan 17 00:26:46.690960 kernel: raid6: using avx512x2 recovery algorithm Jan 17 00:26:46.714735 kernel: xor: automatically using best checksumming function avx Jan 17 00:26:46.863755 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 00:26:46.873754 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 00:26:46.890891 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 00:26:46.905673 systemd-udevd[399]: Using default interface naming scheme 'v255'. Jan 17 00:26:46.910419 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:26:46.934893 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 00:26:46.949563 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 17 00:26:46.980162 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 00:26:46.989062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 00:26:47.034131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 00:26:47.048991 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 17 00:26:47.080852 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:26:47.088483 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:26:47.095580 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:47.102408 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:26:47.113969 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:26:47.140478 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 00:26:47.141954 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:26:47.156185 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:26:47.156405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:47.160615 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:47.163920 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:47.193268 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 00:26:47.193298 kernel: AES CTR mode by8 optimization enabled
Jan 17 00:26:47.164171 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:47.200020 kernel: hv_vmbus: Vmbus version:5.2
Jan 17 00:26:47.167299 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.190190 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.211220 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:47.214304 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:47.226887 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:26:47.236731 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:26:47.250703 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 00:26:47.253598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:48.412181 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 00:26:48.412220 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 00:26:48.412241 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 17 00:26:48.412271 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 17 00:26:48.412288 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 00:26:48.415638 kernel: PTP clock support registered
Jan 17 00:26:48.415663 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 00:26:48.415683 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 00:26:48.415702 kernel: hv_vmbus: registering driver hv_utils
Jan 17 00:26:48.415726 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 00:26:48.415745 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 00:26:48.415762 kernel: scsi host1: storvsc_host_t
Jan 17 00:26:48.415997 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 00:26:48.416010 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 17 00:26:48.416021 kernel: scsi host0: storvsc_host_t
Jan 17 00:26:48.416145 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 00:26:48.416294 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 5
Jan 17 00:26:48.416433 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 00:26:48.416449 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 00:26:48.357146 systemd-resolved[256]: Clock change detected. Flushing caches.
Jan 17 00:26:48.430026 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 00:26:48.430063 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 00:26:48.423012 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:26:48.455427 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:48.466067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#224 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:26:48.472222 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 00:26:48.472574 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 00:26:48.473872 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 00:26:48.476860 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 00:26:48.477054 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 00:26:48.487859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.492094 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 00:26:48.505868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#248 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001
Jan 17 00:26:48.577875 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: VF slot 1 added
Jan 17 00:26:48.599869 kernel: hv_vmbus: registering driver hv_pci
Jan 17 00:26:48.605139 kernel: hv_pci d400ee3c-2159-4fb3-8452-05b5b6d68f95: PCI VMBus probing: Using version 0x10004
Jan 17 00:26:48.605387 kernel: hv_pci d400ee3c-2159-4fb3-8452-05b5b6d68f95: PCI host bridge to bus 2159:00
Jan 17 00:26:48.611201 kernel: pci_bus 2159:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 17 00:26:48.614682 kernel: pci_bus 2159:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 00:26:48.619978 kernel: pci 2159:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 17 00:26:48.624993 kernel: pci 2159:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:26:48.629937 kernel: pci 2159:00:02.0: enabling Extended Tags
Jan 17 00:26:48.644868 kernel: pci 2159:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2159:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 17 00:26:48.644940 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (442)
Jan 17 00:26:48.653771 kernel: pci_bus 2159:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 00:26:48.659820 kernel: pci 2159:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 00:26:48.681875 kernel: BTRFS: device fsid a67b5ac0-cdfd-426d-9386-e029282f433a devid 1 transid 33 /dev/sda3 scanned by (udev-worker) (443)
Jan 17 00:26:48.711536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 00:26:48.730548 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 00:26:48.748251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 00:26:48.763363 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 00:26:48.766858 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 00:26:48.789098 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:26:48.813860 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.824899 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.834859 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:48.973865 kernel: mlx5_core 2159:00:02.0: enabling device (0000 -> 0002)
Jan 17 00:26:48.983881 kernel: mlx5_core 2159:00:02.0: firmware version: 14.30.5026
Jan 17 00:26:49.222936 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: VF registering: eth1
Jan 17 00:26:49.227899 kernel: mlx5_core 2159:00:02.0 eth1: joined to eth0
Jan 17 00:26:49.235875 kernel: mlx5_core 2159:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 17 00:26:49.250884 kernel: mlx5_core 2159:00:02.0 enP8537s1: renamed from eth1
Jan 17 00:26:49.844897 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 00:26:49.847325 disk-uuid[596]: The operation has completed successfully.
Jan 17 00:26:49.936219 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:26:49.936344 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:26:49.964018 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:26:49.970511 sh[719]: Success
Jan 17 00:26:49.990933 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 00:26:50.083502 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:26:50.099018 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:26:50.106264 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:26:50.138865 kernel: BTRFS info (device dm-0): first mount of filesystem a67b5ac0-cdfd-426d-9386-e029282f433a
Jan 17 00:26:50.138921 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:50.144365 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:26:50.147479 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:26:50.150066 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:26:50.219997 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:26:50.225489 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:26:50.235028 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:26:50.241740 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:26:50.264910 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:50.264968 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:50.264989 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:50.279865 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:50.290482 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:26:50.297858 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:50.306567 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:26:50.325123 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:26:50.339204 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:26:50.352003 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:26:50.376600 systemd-networkd[903]: lo: Link UP
Jan 17 00:26:50.376612 systemd-networkd[903]: lo: Gained carrier
Jan 17 00:26:50.380678 systemd-networkd[903]: Enumeration completed
Jan 17 00:26:50.380959 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:26:50.384624 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:50.384630 systemd-networkd[903]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:26:50.390080 systemd[1]: Reached target network.target - Network.
Jan 17 00:26:50.453866 kernel: mlx5_core 2159:00:02.0 enP8537s1: Link up
Jan 17 00:26:50.492603 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: Data path switched to VF: enP8537s1
Jan 17 00:26:50.487929 systemd-networkd[903]: enP8537s1: Link UP
Jan 17 00:26:50.488046 systemd-networkd[903]: eth0: Link UP
Jan 17 00:26:50.496966 systemd-networkd[903]: eth0: Gained carrier
Jan 17 00:26:50.496983 systemd-networkd[903]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:26:50.506787 systemd-networkd[903]: enP8537s1: Gained carrier
Jan 17 00:26:50.543782 systemd-networkd[903]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 17 00:26:50.579242 ignition[884]: Ignition 2.19.0
Jan 17 00:26:50.579258 ignition[884]: Stage: fetch-offline
Jan 17 00:26:50.579311 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.579322 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.579457 ignition[884]: parsed url from cmdline: ""
Jan 17 00:26:50.586913 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:26:50.579462 ignition[884]: no config URL provided
Jan 17 00:26:50.579469 ignition[884]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.579480 ignition[884]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.579487 ignition[884]: failed to fetch config: resource requires networking
Jan 17 00:26:50.583034 ignition[884]: Ignition finished successfully
Jan 17 00:26:50.605931 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:26:50.622629 ignition[912]: Ignition 2.19.0
Jan 17 00:26:50.622643 ignition[912]: Stage: fetch
Jan 17 00:26:50.622870 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.622884 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.622984 ignition[912]: parsed url from cmdline: ""
Jan 17 00:26:50.622987 ignition[912]: no config URL provided
Jan 17 00:26:50.622992 ignition[912]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.622998 ignition[912]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:26:50.623017 ignition[912]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 00:26:50.691106 ignition[912]: GET result: OK
Jan 17 00:26:50.691224 ignition[912]: config has been read from IMDS userdata
Jan 17 00:26:50.691260 ignition[912]: parsing config with SHA512: e8c7d6511235e2a83bda5f47b8ec3596df7e24022bde52ba0f491f10e872494112f447190c28d21a53170a373c240732491504c5ba2766b363a78e280ffcfa35
Jan 17 00:26:50.697233 unknown[912]: fetched base config from "system"
Jan 17 00:26:50.697978 ignition[912]: fetch: fetch complete
Jan 17 00:26:50.697245 unknown[912]: fetched base config from "system"
Jan 17 00:26:50.697986 ignition[912]: fetch: fetch passed
Jan 17 00:26:50.697255 unknown[912]: fetched user config from "azure"
Jan 17 00:26:50.698050 ignition[912]: Ignition finished successfully
Jan 17 00:26:50.712082 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:26:50.725994 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:26:50.745804 ignition[918]: Ignition 2.19.0
Jan 17 00:26:50.745817 ignition[918]: Stage: kargs
Jan 17 00:26:50.748556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:26:50.746053 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.746067 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.746987 ignition[918]: kargs: kargs passed
Jan 17 00:26:50.747040 ignition[918]: Ignition finished successfully
Jan 17 00:26:50.769035 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:26:50.788379 ignition[924]: Ignition 2.19.0
Jan 17 00:26:50.788392 ignition[924]: Stage: disks
Jan 17 00:26:50.788635 ignition[924]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:50.788649 ignition[924]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:50.794412 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:26:50.789583 ignition[924]: disks: disks passed
Jan 17 00:26:50.797722 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:26:50.789644 ignition[924]: Ignition finished successfully
Jan 17 00:26:50.803238 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:26:50.806686 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:26:50.811909 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:26:50.816737 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:26:50.846021 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:26:50.879026 systemd-fsck[932]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 00:26:50.884055 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:26:50.898715 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:26:50.998870 kernel: EXT4-fs (sda9): mounted filesystem ab055cfb-d92d-4784-aa05-26ea844796bc r/w with ordered data mode. Quota mode: none.
Jan 17 00:26:50.999415 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:26:51.004626 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:26:51.021992 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:26:51.031245 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:26:51.038047 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 00:26:51.052514 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (943)
Jan 17 00:26:51.052559 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.045934 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:26:51.063915 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:51.063957 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:51.045973 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:26:51.072837 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:26:51.079493 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:51.082047 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:26:51.092244 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:26:51.223388 coreos-metadata[945]: Jan 17 00:26:51.223 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 00:26:51.229686 coreos-metadata[945]: Jan 17 00:26:51.229 INFO Fetch successful
Jan 17 00:26:51.229686 coreos-metadata[945]: Jan 17 00:26:51.229 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 00:26:51.240687 coreos-metadata[945]: Jan 17 00:26:51.240 INFO Fetch successful
Jan 17 00:26:51.246912 coreos-metadata[945]: Jan 17 00:26:51.246 INFO wrote hostname ci-4081.3.6-n-c809bb5d02 to /sysroot/etc/hostname
Jan 17 00:26:51.253242 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:26:51.270821 initrd-setup-root[973]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:26:51.286453 initrd-setup-root[980]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:26:51.292205 initrd-setup-root[987]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:26:51.301927 initrd-setup-root[994]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:26:51.573409 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:26:51.580088 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:26:51.588033 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:26:51.600159 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:26:51.606655 kernel: BTRFS info (device sda6): last unmount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.637562 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:26:51.643344 ignition[1063]: INFO : Ignition 2.19.0
Jan 17 00:26:51.643344 ignition[1063]: INFO : Stage: mount
Jan 17 00:26:51.643344 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:51.643344 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:51.643344 ignition[1063]: INFO : mount: mount passed
Jan 17 00:26:51.643344 ignition[1063]: INFO : Ignition finished successfully
Jan 17 00:26:51.645085 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:26:51.661916 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:26:51.680039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:26:51.698870 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (1073)
Jan 17 00:26:51.706929 kernel: BTRFS info (device sda6): first mount of filesystem 0f2efc88-79cd-4337-a46a-d3848e5a06b0
Jan 17 00:26:51.706992 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 00:26:51.709714 kernel: BTRFS info (device sda6): using free space tree
Jan 17 00:26:51.718866 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 00:26:51.719299 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:26:51.747516 ignition[1090]: INFO : Ignition 2.19.0
Jan 17 00:26:51.747516 ignition[1090]: INFO : Stage: files
Jan 17 00:26:51.751917 ignition[1090]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:51.751917 ignition[1090]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:51.751917 ignition[1090]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:26:51.761301 ignition[1090]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:26:51.761301 ignition[1090]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:26:51.789439 ignition[1090]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:26:51.793556 ignition[1090]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:26:51.797707 ignition[1090]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:26:51.793882 unknown[1090]: wrote ssh authorized keys file for user: core
Jan 17 00:26:51.804120 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:26:51.804120 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Jan 17 00:26:51.843805 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:26:51.885942 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:51.892108 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-x86-64.raw: attempt #1
Jan 17 00:26:52.088047 systemd-networkd[903]: eth0: Gained IPv6LL
Jan 17 00:26:52.323805 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:26:52.649222 ignition[1090]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-x86-64.raw"
Jan 17 00:26:52.649222 ignition[1090]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:26:52.659481 ignition[1090]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:26:52.665079 ignition[1090]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:26:52.665079 ignition[1090]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:26:52.673638 ignition[1090]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:26:52.673638 ignition[1090]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:26:52.681355 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:26:52.686089 ignition[1090]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:26:52.690803 ignition[1090]: INFO : files: files passed
Jan 17 00:26:52.692849 ignition[1090]: INFO : Ignition finished successfully
Jan 17 00:26:52.696772 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:26:52.706386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:26:52.713467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:26:52.720910 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:26:52.722255 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:26:52.746422 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.746422 initrd-setup-root-after-ignition[1119]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.758955 initrd-setup-root-after-ignition[1123]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:26:52.750595 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:26:52.767392 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:26:52.778031 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:26:52.803459 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:26:52.803608 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:26:52.813670 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:26:52.816514 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:26:52.822122 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:26:52.830091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:26:52.846324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:26:52.857093 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:26:52.869394 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:26:52.875857 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:52.879555 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:26:52.886945 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:26:52.887118 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:26:52.892978 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:26:52.898230 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:26:52.905028 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:26:52.910360 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:26:52.913827 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:26:52.922274 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:26:52.927573 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:26:52.934073 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:26:52.939323 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:26:52.940331 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:26:52.940737 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:26:52.940983 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:26:52.941688 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:26:52.942259 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:26:52.942658 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:26:52.952106 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:26:52.957523 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:26:52.964576 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:26:52.970727 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:26:52.970866 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:26:52.979173 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:26:52.979334 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:26:52.983984 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 17 00:26:52.984133 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 00:26:53.003880 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:26:53.012104 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:26:53.015913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:26:53.016069 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:26:53.026133 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:26:53.026280 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:26:53.038147 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:26:53.049356 ignition[1143]: INFO : Ignition 2.19.0
Jan 17 00:26:53.049356 ignition[1143]: INFO : Stage: umount
Jan 17 00:26:53.049356 ignition[1143]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:26:53.049356 ignition[1143]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 00:26:53.038258 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:26:53.067196 ignition[1143]: INFO : umount: umount passed
Jan 17 00:26:53.067196 ignition[1143]: INFO : Ignition finished successfully
Jan 17 00:26:53.055241 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:26:53.056906 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:26:53.068498 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:26:53.069411 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:26:53.069520 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:26:53.076829 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:26:53.076908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:26:53.083468 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:26:53.083523 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:26:53.083834 systemd[1]: Stopped target network.target - Network.
Jan 17 00:26:53.084235 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:26:53.084279 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:26:53.084757 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:26:53.087980 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:26:53.105180 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:26:53.109590 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:26:53.110590 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:26:53.111500 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:26:53.111554 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:26:53.111933 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:26:53.111969 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:26:53.112356 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:26:53.112402 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:26:53.112888 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:26:53.112937 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:26:53.113594 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:26:53.113883 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:26:53.144910 systemd-networkd[903]: eth0: DHCPv6 lease lost
Jan 17 00:26:53.148413 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:26:53.148564 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:26:53.154022 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:26:53.154069 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:26:53.180079 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:26:53.183754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:26:53.183833 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:26:53.192598 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:26:53.197532 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:26:53.197665 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:26:53.228541 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:26:53.230951 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:26:53.244393 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:26:53.244493 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:26:53.247267 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:26:53.247308 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:26:53.252507 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:26:53.252566 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:26:53.258737 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:26:53.258789 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:26:53.264032 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:26:53.264082 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:26:53.281094 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:26:53.291047 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:26:53.306772 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: Data path switched from VF: enP8537s1
Jan 17 00:26:53.291122 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:26:53.297157 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:26:53.297213 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:26:53.306821 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:26:53.306904 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:26:53.310062 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:26:53.310118 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:26:53.313307 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:26:53.313357 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:26:53.322033 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:26:53.324564 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:26:53.350431 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:26:53.350511 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:26:53.356588 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:26:53.356721 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:26:53.361293 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:26:53.361383 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:26:53.570888 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:26:53.571035 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:26:53.576987 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:26:53.582049 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:26:53.582131 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:26:53.599080 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:26:53.609488 systemd[1]: Switching root.
Jan 17 00:26:53.645870 systemd-journald[177]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:26:53.645947 systemd-journald[177]: Journal stopped
Jan 17 00:26:55.978771 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:26:55.978807 kernel: SELinux: policy capability open_perms=1
Jan 17 00:26:55.978829 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:26:55.978872 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:26:55.978888 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:26:55.978902 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:26:55.978919 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:26:55.978935 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:26:55.978953 kernel: audit: type=1403 audit(1768609614.483:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:26:55.978971 systemd[1]: Successfully loaded SELinux policy in 69.677ms.
Jan 17 00:26:55.978989 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.032ms.
Jan 17 00:26:55.979007 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:26:55.979023 systemd[1]: Detected virtualization microsoft.
Jan 17 00:26:55.979040 systemd[1]: Detected architecture x86-64.
Jan 17 00:26:55.979060 systemd[1]: Detected first boot.
Jan 17 00:26:55.979078 systemd[1]: Hostname set to .
Jan 17 00:26:55.979095 systemd[1]: Initializing machine ID from random generator.
Jan 17 00:26:55.979112 zram_generator::config[1186]: No configuration found.
Jan 17 00:26:55.979130 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:26:55.979149 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:26:55.979166 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:26:55.979183 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:26:55.979201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:26:55.979218 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:26:55.979236 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:26:55.979253 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:26:55.979273 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:26:55.979291 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:26:55.979311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:26:55.979328 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:26:55.979346 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:26:55.979363 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:26:55.979381 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:26:55.979398 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:26:55.979415 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:26:55.979435 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:26:55.979452 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:26:55.979469 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:26:55.979486 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:26:55.979504 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:26:55.979526 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:26:55.979544 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:26:55.979559 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:26:55.979578 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:26:55.979595 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:26:55.979611 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:26:55.979630 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:26:55.979646 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:26:55.979663 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:26:55.979682 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:26:55.979705 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:26:55.979725 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:26:55.979743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:26:55.979762 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:26:55.979781 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:26:55.979804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:26:55.979822 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:26:55.979857 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:26:55.979875 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:26:55.979890 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:26:55.979906 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:26:55.979920 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:26:55.979931 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:26:55.979945 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:26:55.979956 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:26:55.979966 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:26:55.979977 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:26:55.979991 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:26:55.980004 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:26:55.980014 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:26:55.980031 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:26:55.980042 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:26:55.980055 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:26:55.980066 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:26:55.980081 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:26:55.980092 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:26:55.980102 kernel: loop: module loaded
Jan 17 00:26:55.980112 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:26:55.980126 kernel: ACPI: bus type drm_connector registered
Jan 17 00:26:55.980137 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:26:55.980149 kernel: fuse: init (API version 7.39)
Jan 17 00:26:55.980159 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:26:55.980174 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:26:55.980184 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:26:55.980219 systemd-journald[1292]: Collecting audit messages is disabled.
Jan 17 00:26:55.980244 systemd[1]: Stopped verity-setup.service.
Jan 17 00:26:55.980256 systemd-journald[1292]: Journal started
Jan 17 00:26:55.980277 systemd-journald[1292]: Runtime Journal (/run/log/journal/13699ff5e1664991a9a169bc7edf3a81) is 8.0M, max 158.7M, 150.7M free.
Jan 17 00:26:55.383088 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:26:55.411195 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 17 00:26:55.411588 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:26:55.989867 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 17 00:26:55.998331 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:26:55.999069 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:26:56.002740 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:26:56.006055 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:26:56.009100 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:26:56.012387 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:26:56.015321 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:26:56.018396 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:26:56.022316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:26:56.026397 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:26:56.026598 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:26:56.031403 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:26:56.031617 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:26:56.035935 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:26:56.036244 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:26:56.040066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:26:56.040325 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:26:56.046583 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:26:56.046945 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:26:56.050770 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:26:56.051117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:26:56.054616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:26:56.058583 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:26:56.063206 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:26:56.083701 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:26:56.097056 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:26:56.102359 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:26:56.105269 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:26:56.105323 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:26:56.110002 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:26:56.122363 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:26:56.128917 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:26:56.132357 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:26:56.137049 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:26:56.146964 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:26:56.150284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:26:56.151707 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:26:56.157351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:26:56.160093 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:26:56.169033 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:26:56.182045 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:26:56.183309 systemd-journald[1292]: Time spent on flushing to /var/log/journal/13699ff5e1664991a9a169bc7edf3a81 is 24.721ms for 954 entries.
Jan 17 00:26:56.183309 systemd-journald[1292]: System Journal (/var/log/journal/13699ff5e1664991a9a169bc7edf3a81) is 8.0M, max 2.6G, 2.6G free.
Jan 17 00:26:56.238611 systemd-journald[1292]: Received client request to flush runtime journal.
Jan 17 00:26:56.192618 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:26:56.198547 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:26:56.202436 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:26:56.206957 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:26:56.215734 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:26:56.231218 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:26:56.258020 kernel: loop0: detected capacity change from 0 to 219144
Jan 17 00:26:56.244712 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:26:56.258312 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:26:56.266768 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:26:56.298136 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:26:56.302907 udevadm[1333]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 17 00:26:56.340110 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 17 00:26:56.342907 systemd-tmpfiles[1323]: ACLs are not supported, ignoring.
Jan 17 00:26:56.349879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:26:56.358042 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:26:56.374888 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:26:56.383704 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:26:56.386373 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:26:56.404872 kernel: loop1: detected capacity change from 0 to 140768
Jan 17 00:26:56.442709 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:26:56.455728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:26:56.488800 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jan 17 00:26:56.488827 systemd-tmpfiles[1344]: ACLs are not supported, ignoring.
Jan 17 00:26:56.497716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:26:56.545871 kernel: loop2: detected capacity change from 0 to 31056
Jan 17 00:26:56.690878 kernel: loop3: detected capacity change from 0 to 142488
Jan 17 00:26:56.812875 kernel: loop4: detected capacity change from 0 to 219144
Jan 17 00:26:56.855800 kernel: loop5: detected capacity change from 0 to 140768
Jan 17 00:26:56.888996 kernel: loop6: detected capacity change from 0 to 31056
Jan 17 00:26:56.905895 kernel: loop7: detected capacity change from 0 to 142488
Jan 17 00:26:56.931910 (sd-merge)[1350]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Jan 17 00:26:56.932553 (sd-merge)[1350]: Merged extensions into '/usr'.
Jan 17 00:26:56.939816 systemd[1]: Reloading requested from client PID 1322 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:26:56.939835 systemd[1]: Reloading...
Jan 17 00:26:57.007875 zram_generator::config[1375]: No configuration found.
Jan 17 00:26:57.168839 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:26:57.233488 systemd[1]: Reloading finished in 293 ms.
Jan 17 00:26:57.259071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:26:57.263405 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:26:57.277039 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:26:57.281062 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:26:57.287043 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:26:57.295678 systemd[1]: Reloading requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:26:57.295694 systemd[1]: Reloading...
Jan 17 00:26:57.364777 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 00:26:57.368371 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 00:26:57.377079 systemd-tmpfiles[1436]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 00:26:57.377512 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Jan 17 00:26:57.377605 systemd-tmpfiles[1436]: ACLs are not supported, ignoring. Jan 17 00:26:57.380948 systemd-udevd[1437]: Using default interface naming scheme 'v255'. Jan 17 00:26:57.393188 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:26:57.393206 systemd-tmpfiles[1436]: Skipping /boot Jan 17 00:26:57.411874 zram_generator::config[1460]: No configuration found. Jan 17 00:26:57.423468 systemd-tmpfiles[1436]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 00:26:57.423486 systemd-tmpfiles[1436]: Skipping /boot Jan 17 00:26:57.706880 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:26:57.732965 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 00:26:57.756885 kernel: hv_vmbus: registering driver hv_balloon Jan 17 00:26:57.760900 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 00:26:57.771932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x85 status: scsi 0x2 srb 0x6 hv 0xc0000001 Jan 17 00:26:57.800909 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 00:26:57.851868 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 00:26:57.863862 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 00:26:57.874022 kernel: Console: switching to colour dummy device 80x25 Jan 17 00:26:57.893912 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 00:26:58.045702 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 17 00:26:58.046403 systemd[1]: Reloading finished in 750 ms. Jan 17 00:26:58.069771 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 00:26:58.075381 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 00:26:58.105864 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1510) Jan 17 00:26:58.123063 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:26:58.136037 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 00:26:58.142559 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 00:26:58.162311 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 00:26:58.178349 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 00:26:58.193024 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 00:26:58.278105 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 17 00:26:58.295219 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jan 17 00:26:58.295738 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 00:26:58.310331 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 00:26:58.327321 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 00:26:58.344663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 00:26:58.357468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 00:26:58.361637 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 00:26:58.362216 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 00:26:58.379640 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 00:26:58.389273 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:26:58.392324 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 00:26:58.400884 systemd[1]: Finished ensure-sysext.service. Jan 17 00:26:58.406856 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 00:26:58.412917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 00:26:58.413895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 00:26:58.419775 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 00:26:58.421120 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 00:26:58.425702 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 00:26:58.426215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 00:26:58.432781 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 00:26:58.433759 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 00:26:58.448368 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 00:26:58.475041 ldconfig[1317]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 00:26:58.487872 augenrules[1632]: No rules Jan 17 00:26:58.489394 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:26:58.494100 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 00:26:58.511586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 00:26:58.524113 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 00:26:58.527025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 00:26:58.527117 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 00:26:58.530906 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 00:26:58.533378 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 00:26:58.543429 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 00:26:58.546907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 17 00:26:58.547170 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:26:58.561443 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 00:26:58.573835 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 00:26:58.580047 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 00:26:58.597671 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 00:26:58.603744 lvm[1644]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:26:58.633908 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 00:26:58.638960 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 00:26:58.641257 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 00:26:58.648705 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 00:26:58.656043 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 00:26:58.666860 lvm[1661]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 00:26:58.689922 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 00:26:58.706472 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 00:26:58.740317 systemd-resolved[1585]: Positive Trust Anchors: Jan 17 00:26:58.740773 systemd-resolved[1585]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 00:26:58.740946 systemd-resolved[1585]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 00:26:58.744123 systemd-networkd[1582]: lo: Link UP Jan 17 00:26:58.744354 systemd-networkd[1582]: lo: Gained carrier Jan 17 00:26:58.747070 systemd-networkd[1582]: Enumeration completed Jan 17 00:26:58.747611 systemd-networkd[1582]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:26:58.747701 systemd-networkd[1582]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:26:58.748146 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 00:26:58.752562 systemd-resolved[1585]: Using system hostname 'ci-4081.3.6-n-c809bb5d02'. Jan 17 00:26:58.758042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
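The positive trust anchor systemd-resolved logs just below is the DNSSEC root key (KSK-2017): a DS record whose fields are key tag 20326, algorithm 8 (RSA/SHA-256), and digest type 2 (SHA-256 digest). A minimal parse of that exact line:

    # Fields of a DS record: owner, class, type, key tag, algorithm, digest type, digest.
    anchor = (". IN DS 20326 8 2 "
              "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, _cls, _rtype, key_tag, alg, digest_type, digest = anchor.split()
    assert (key_tag, alg, digest_type) == ("20326", "8", "2")  # root KSK-2017
    print(f"trust anchor for '{owner}': key tag {key_tag}, SHA-256 digest {digest[:16]}...")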
Jan 17 00:26:58.804880 kernel: mlx5_core 2159:00:02.0 enP8537s1: Link up Jan 17 00:26:58.828499 kernel: hv_netvsc 000d3a67-5b7c-000d-3a67-5b7c000d3a67 eth0: Data path switched to VF: enP8537s1 Jan 17 00:26:58.828058 systemd-networkd[1582]: enP8537s1: Link UP Jan 17 00:26:58.828213 systemd-networkd[1582]: eth0: Link UP Jan 17 00:26:58.828218 systemd-networkd[1582]: eth0: Gained carrier Jan 17 00:26:58.828245 systemd-networkd[1582]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:26:58.830648 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 00:26:58.834230 systemd[1]: Reached target network.target - Network. Jan 17 00:26:58.836003 systemd-networkd[1582]: enP8537s1: Gained carrier Jan 17 00:26:58.837230 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 00:26:58.841107 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 00:26:58.843927 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 00:26:58.847423 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 00:26:58.851114 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 00:26:58.854224 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 00:26:58.857953 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 00:26:58.861711 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 00:26:58.861832 systemd[1]: Reached target paths.target - Path Units. Jan 17 00:26:58.864395 systemd[1]: Reached target timers.target - Timer Units. Jan 17 00:26:58.870435 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 00:26:58.875216 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 00:26:58.878933 systemd-networkd[1582]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 00:26:58.883834 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 00:26:58.888451 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 00:26:58.891663 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 00:26:58.894286 systemd[1]: Reached target basic.target - Basic System. Jan 17 00:26:58.896786 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:26:58.896818 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 00:26:58.902934 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 00:26:58.908971 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 00:26:58.918042 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 00:26:58.926569 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 00:26:58.933955 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 00:26:58.944054 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
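The hv_netvsc line above ("Data path switched to VF: enP8537s1") is Azure accelerated networking: the Mellanox virtual function is enslaved to the synthetic eth0 device and carries the actual traffic. A sketch for confirming the pairing from sysfs; the interface name comes from this log, and the `master` symlink is assumed to exist only while the VF is enslaved:

    # Show which synthetic device the VF is enslaved to (expects "eth0" on this VM).
    import os

    vf = "enP8537s1"  # name from this boot; it differs per VM
    print(vf, "->", os.path.basename(os.readlink(f"/sys/class/net/{vf}/master")))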
Jan 17 00:26:58.946823 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 00:26:58.946886 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 00:26:58.948121 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 00:26:58.951986 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 00:26:58.955493 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 00:26:58.968988 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 00:26:58.977035 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 00:26:58.982879 (chronyd)[1669]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 00:26:58.986317 jq[1673]: false Jan 17 00:26:58.992650 KVP[1677]: KVP starting; pid is:1677 Jan 17 00:26:58.994436 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 00:26:58.999343 chronyd[1684]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 00:26:59.008306 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 00:26:59.012277 chronyd[1684]: Timezone right/UTC failed leap second check, ignoring Jan 17 00:26:59.014277 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 00:26:59.012500 chronyd[1684]: Loaded seccomp filter (level 2) Jan 17 00:26:59.014937 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 00:26:59.032587 kernel: hv_utils: KVP IC version 4.0 Jan 17 00:26:59.024104 KVP[1677]: KVP LIC Version: 3.1 Jan 17 00:26:59.018412 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 00:26:59.025994 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 00:26:59.038497 jq[1687]: true Jan 17 00:26:59.042181 extend-filesystems[1674]: Found loop4 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found loop5 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found loop6 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found loop7 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda1 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda2 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda3 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found usr Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda4 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda6 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda7 Jan 17 00:26:59.042181 extend-filesystems[1674]: Found sda9 Jan 17 00:26:59.042181 extend-filesystems[1674]: Checking size of /dev/sda9 Jan 17 00:26:59.104509 dbus-daemon[1672]: [system] SELinux support is enabled Jan 17 00:26:59.045756 systemd[1]: Started chronyd.service - NTP client/server. 
Jan 17 00:26:59.118465 update_engine[1686]: I20260117 00:26:59.111505 1686 main.cc:92] Flatcar Update Engine starting Jan 17 00:26:59.069480 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 00:26:59.070061 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 00:26:59.083624 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 00:26:59.084471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 00:26:59.107283 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 00:26:59.120756 extend-filesystems[1674]: Old size kept for /dev/sda9 Jan 17 00:26:59.131349 extend-filesystems[1674]: Found sr0 Jan 17 00:26:59.167143 update_engine[1686]: I20260117 00:26:59.150315 1686 update_check_scheduler.cc:74] Next update check in 3m24s Jan 17 00:26:59.134117 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 00:26:59.134386 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 00:26:59.156638 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 00:26:59.157993 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 00:26:59.176861 jq[1703]: true Jan 17 00:26:59.178271 (ntainerd)[1707]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 00:26:59.182777 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 00:26:59.182837 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 00:26:59.188932 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 00:26:59.188974 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 00:26:59.198985 systemd[1]: Started update-engine.service - Update Engine. Jan 17 00:26:59.213091 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 17 00:26:59.242870 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (1522) Jan 17 00:26:59.256230 coreos-metadata[1671]: Jan 17 00:26:59.255 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 00:26:59.258624 coreos-metadata[1671]: Jan 17 00:26:59.258 INFO Fetch successful Jan 17 00:26:59.258969 coreos-metadata[1671]: Jan 17 00:26:59.258 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 00:26:59.263049 coreos-metadata[1671]: Jan 17 00:26:59.263 INFO Fetch successful Jan 17 00:26:59.264171 coreos-metadata[1671]: Jan 17 00:26:59.264 INFO Fetching http://168.63.129.16/machine/6d79e479-da37-47f1-b74c-fcc8b5bcb97c/45f94b4e%2Db390%2D43a9%2D8224%2D98e883047a54.%5Fci%2D4081.3.6%2Dn%2Dc809bb5d02?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 00:26:59.265473 coreos-metadata[1671]: Jan 17 00:26:59.265 INFO Fetch successful Jan 17 00:26:59.265739 coreos-metadata[1671]: Jan 17 00:26:59.265 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 00:26:59.270955 tar[1700]: linux-amd64/LICENSE Jan 17 00:26:59.271264 tar[1700]: linux-amd64/helm Jan 17 00:26:59.304247 coreos-metadata[1671]: Jan 17 00:26:59.303 INFO Fetch successful Jan 17 00:26:59.335715 systemd-logind[1685]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 17 00:26:59.354041 systemd-logind[1685]: New seat seat0. Jan 17 00:26:59.361735 systemd[1]: Started systemd-logind.service - User Login Management. Jan 17 00:26:59.430317 bash[1761]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:26:59.435486 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 00:26:59.443143 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 00:26:59.456750 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 00:26:59.457399 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 00:26:59.487379 locksmithd[1723]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 00:26:59.526569 sshd_keygen[1710]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:26:59.567214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:26:59.581586 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:26:59.598334 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:26:59.598551 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:26:59.613161 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:26:59.655580 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 00:26:59.667118 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:26:59.683542 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:26:59.691413 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:26:59.781155 containerd[1707]: time="2026-01-17T00:26:59.781052100Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:26:59.823278 containerd[1707]: time="2026-01-17T00:26:59.823215000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
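The coreos-metadata fetches above hit two distinct services: the wireserver at 168.63.129.16 and the instance metadata service (IMDS) at 169.254.169.254. The IMDS call can be reproduced directly from inside the VM; Azure requires the "Metadata: true" request header:

    # Repeat the vmSize IMDS query from the log (link-local, VM-internal only).
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # the VM size string; value varies per VM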
type=io.containerd.snapshotter.v1 Jan 17 00:26:59.825283 containerd[1707]: time="2026-01-17T00:26:59.825233700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:59.825424 containerd[1707]: time="2026-01-17T00:26:59.825408100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:26:59.825498 containerd[1707]: time="2026-01-17T00:26:59.825483800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 00:26:59.825769 containerd[1707]: time="2026-01-17T00:26:59.825747500Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:26:59.825909 containerd[1707]: time="2026-01-17T00:26:59.825891600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826056 containerd[1707]: time="2026-01-17T00:26:59.826035300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826154 containerd[1707]: time="2026-01-17T00:26:59.826137500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826467 containerd[1707]: time="2026-01-17T00:26:59.826443200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826545 containerd[1707]: time="2026-01-17T00:26:59.826531700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826615 containerd[1707]: time="2026-01-17T00:26:59.826600200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826686 containerd[1707]: time="2026-01-17T00:26:59.826671500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.826879 containerd[1707]: time="2026-01-17T00:26:59.826837700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.827214 containerd[1707]: time="2026-01-17T00:26:59.827190600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:26:59.827478 containerd[1707]: time="2026-01-17T00:26:59.827454800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:26:59.827740 containerd[1707]: time="2026-01-17T00:26:59.827545400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jan 17 00:26:59.827740 containerd[1707]: time="2026-01-17T00:26:59.827654700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:26:59.827740 containerd[1707]: time="2026-01-17T00:26:59.827710700Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:26:59.843976 containerd[1707]: time="2026-01-17T00:26:59.843948200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:26:59.844121 containerd[1707]: time="2026-01-17T00:26:59.844101600Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:26:59.844306 containerd[1707]: time="2026-01-17T00:26:59.844242700Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 00:26:59.844306 containerd[1707]: time="2026-01-17T00:26:59.844280100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:26:59.844484 containerd[1707]: time="2026-01-17T00:26:59.844431300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:26:59.844773 containerd[1707]: time="2026-01-17T00:26:59.844667200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:26:59.845256 containerd[1707]: time="2026-01-17T00:26:59.845232600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:26:59.845514 containerd[1707]: time="2026-01-17T00:26:59.845494600Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:26:59.845672 containerd[1707]: time="2026-01-17T00:26:59.845585700Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:26:59.845672 containerd[1707]: time="2026-01-17T00:26:59.845609300Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:26:59.845672 containerd[1707]: time="2026-01-17T00:26:59.845627600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.845856 containerd[1707]: time="2026-01-17T00:26:59.845794000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.845856 containerd[1707]: time="2026-01-17T00:26:59.845819300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.845949400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.845977200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.845996200Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.846028700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.846048400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:26:59.846095 containerd[1707]: time="2026-01-17T00:26:59.846077300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846342100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846368700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846388400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846421300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846439700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846482 containerd[1707]: time="2026-01-17T00:26:59.846456000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846721300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846746500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846780100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846818600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846839500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846880800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.846931 containerd[1707]: time="2026-01-17T00:26:59.846907400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847269400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847294400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847310200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847462900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847493100Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:26:59.847540 containerd[1707]: time="2026-01-17T00:26:59.847510800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:26:59.848017 containerd[1707]: time="2026-01-17T00:26:59.847528600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:26:59.848017 containerd[1707]: time="2026-01-17T00:26:59.847800100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.848017 containerd[1707]: time="2026-01-17T00:26:59.847818400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 00:26:59.848017 containerd[1707]: time="2026-01-17T00:26:59.847832100Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:26:59.848017 containerd[1707]: time="2026-01-17T00:26:59.847864500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:26:59.848774 containerd[1707]: time="2026-01-17T00:26:59.848534200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:26:59.848774 containerd[1707]: time="2026-01-17T00:26:59.848642700Z" level=info msg="Connect containerd service" Jan 17 00:26:59.848774 containerd[1707]: time="2026-01-17T00:26:59.848703000Z" level=info msg="using legacy CRI server" Jan 17 00:26:59.848774 containerd[1707]: time="2026-01-17T00:26:59.848714300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:26:59.849515 containerd[1707]: time="2026-01-17T00:26:59.849229000Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 00:26:59.850548 containerd[1707]: time="2026-01-17T00:26:59.850450200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:26:59.850883 containerd[1707]: time="2026-01-17T00:26:59.850816500Z" level=info msg="Start subscribing containerd event" Jan 17 00:26:59.851048 containerd[1707]: time="2026-01-17T00:26:59.850886700Z" level=info msg="Start recovering state" Jan 17 00:26:59.851048 containerd[1707]: time="2026-01-17T00:26:59.850967300Z" level=info msg="Start event monitor" Jan 17 00:26:59.851048 containerd[1707]: time="2026-01-17T00:26:59.850992200Z" level=info msg="Start snapshots syncer" Jan 17 00:26:59.851048 containerd[1707]: time="2026-01-17T00:26:59.851006700Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:26:59.851048 containerd[1707]: time="2026-01-17T00:26:59.851017300Z" level=info msg="Start streaming server" Jan 17 00:26:59.851374 containerd[1707]: time="2026-01-17T00:26:59.851277800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:26:59.851503 containerd[1707]: time="2026-01-17T00:26:59.851478200Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:26:59.851743 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:26:59.857341 containerd[1707]: time="2026-01-17T00:26:59.857321700Z" level=info msg="containerd successfully booted in 0.077319s" Jan 17 00:27:00.005799 tar[1700]: linux-amd64/README.md Jan 17 00:27:00.017542 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:27:00.280094 systemd-networkd[1582]: eth0: Gained IPv6LL Jan 17 00:27:00.283781 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:27:00.288303 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:27:00.295110 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:00.300719 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:27:00.305982 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 00:27:00.343312 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 00:27:00.358193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
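In the CRI config dump above, containerd expects CNI configuration in /etc/cni/net.d and plugins in /opt/cni/bin, and it logs "no network config found" because nothing has installed a CNI yet; that is expected this early in first boot. A quick check mirroring that error:

    # containerd's CRI plugin loads the first *.conf/*.conflist from NetworkPluginConfDir.
    from pathlib import Path

    confs = sorted(Path("/etc/cni/net.d").glob("*.conf*"))
    print(confs or "no CNI network config yet")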
Jan 17 00:27:01.159235 waagent[1812]: 2026-01-17T00:27:01.159120Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 00:27:01.163004 waagent[1812]: 2026-01-17T00:27:01.162925Z INFO Daemon Daemon OS: flatcar 4081.3.6 Jan 17 00:27:01.165708 waagent[1812]: 2026-01-17T00:27:01.165631Z INFO Daemon Daemon Python: 3.11.9 Jan 17 00:27:01.168284 waagent[1812]: 2026-01-17T00:27:01.168013Z INFO Daemon Daemon Run daemon Jan 17 00:27:01.171241 waagent[1812]: 2026-01-17T00:27:01.170349Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.6' Jan 17 00:27:01.175212 waagent[1812]: 2026-01-17T00:27:01.175069Z INFO Daemon Daemon Using waagent for provisioning Jan 17 00:27:01.178535 waagent[1812]: 2026-01-17T00:27:01.178485Z INFO Daemon Daemon Activate resource disk Jan 17 00:27:01.181233 waagent[1812]: 2026-01-17T00:27:01.181175Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 00:27:01.190069 waagent[1812]: 2026-01-17T00:27:01.189999Z INFO Daemon Daemon Found device: None Jan 17 00:27:01.193642 waagent[1812]: 2026-01-17T00:27:01.192796Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 00:27:01.198325 waagent[1812]: 2026-01-17T00:27:01.197528Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 00:27:01.205391 waagent[1812]: 2026-01-17T00:27:01.205309Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:27:01.208816 waagent[1812]: 2026-01-17T00:27:01.208756Z INFO Daemon Daemon Running default provisioning handler Jan 17 00:27:01.223868 waagent[1812]: 2026-01-17T00:27:01.221750Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 00:27:01.229328 waagent[1812]: 2026-01-17T00:27:01.229269Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 00:27:01.235863 waagent[1812]: 2026-01-17T00:27:01.233831Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 00:27:01.237536 waagent[1812]: 2026-01-17T00:27:01.236450Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 00:27:01.283022 waagent[1812]: 2026-01-17T00:27:01.282920Z INFO Daemon Daemon Successfully mounted dvd Jan 17 00:27:01.301282 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 00:27:01.304738 waagent[1812]: 2026-01-17T00:27:01.303817Z INFO Daemon Daemon Detect protocol endpoint Jan 17 00:27:01.306873 waagent[1812]: 2026-01-17T00:27:01.306774Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 00:27:01.311601 waagent[1812]: 2026-01-17T00:27:01.310094Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 00:27:01.314320 waagent[1812]: 2026-01-17T00:27:01.313239Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 00:27:01.317351 waagent[1812]: 2026-01-17T00:27:01.316241Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 00:27:01.320019 waagent[1812]: 2026-01-17T00:27:01.318683Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 00:27:01.339403 waagent[1812]: 2026-01-17T00:27:01.339337Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 00:27:01.342859 waagent[1812]: 2026-01-17T00:27:01.342807Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 00:27:01.345747 waagent[1812]: 2026-01-17T00:27:01.345665Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 00:27:01.442066 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:01.446724 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:27:01.450555 (kubelet)[1833]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:01.451632 systemd[1]: Startup finished in 1.017s (kernel) + 7.627s (initrd) + 7.036s (userspace) = 15.680s. Jan 17 00:27:01.609175 waagent[1812]: 2026-01-17T00:27:01.609066Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 00:27:01.615274 waagent[1812]: 2026-01-17T00:27:01.610437Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 00:27:01.617374 waagent[1812]: 2026-01-17T00:27:01.617314Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:27:01.629991 waagent[1812]: 2026-01-17T00:27:01.629752Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.177 Jan 17 00:27:01.634030 waagent[1812]: 2026-01-17T00:27:01.633969Z INFO Daemon Jan 17 00:27:01.637211 waagent[1812]: 2026-01-17T00:27:01.637143Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 448ee000-85f6-4762-88c1-1adad2ef5b7e eTag: 12324230295748884835 source: Fabric] Jan 17 00:27:01.643571 waagent[1812]: 2026-01-17T00:27:01.643504Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 00:27:01.648194 waagent[1812]: 2026-01-17T00:27:01.648139Z INFO Daemon Jan 17 00:27:01.650052 waagent[1812]: 2026-01-17T00:27:01.649995Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:27:01.660659 waagent[1812]: 2026-01-17T00:27:01.660616Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 00:27:01.694046 login[1792]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 00:27:01.694113 login[1791]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 00:27:01.708764 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:27:01.716182 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:27:01.725158 systemd-logind[1685]: New session 1 of user core. Jan 17 00:27:01.736350 systemd-logind[1685]: New session 2 of user core. Jan 17 00:27:01.754953 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:27:01.766230 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Jan 17 00:27:01.774115 waagent[1812]: 2026-01-17T00:27:01.774026Z INFO Daemon Downloaded certificate {'thumbprint': '5FCE0F32B5674435F7508D2DC907DC1B6CC3CBBA', 'hasPrivateKey': True} Jan 17 00:27:01.783152 waagent[1812]: 2026-01-17T00:27:01.780705Z INFO Daemon Fetch goal state completed Jan 17 00:27:01.785285 (systemd)[1848]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:27:01.794927 waagent[1812]: 2026-01-17T00:27:01.791339Z INFO Daemon Daemon Starting provisioning Jan 17 00:27:01.795910 waagent[1812]: 2026-01-17T00:27:01.795831Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 00:27:01.796972 waagent[1812]: 2026-01-17T00:27:01.796929Z INFO Daemon Daemon Set hostname [ci-4081.3.6-n-c809bb5d02] Jan 17 00:27:01.807936 waagent[1812]: 2026-01-17T00:27:01.807389Z INFO Daemon Daemon Publish hostname [ci-4081.3.6-n-c809bb5d02] Jan 17 00:27:01.811086 waagent[1812]: 2026-01-17T00:27:01.810985Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 00:27:01.815775 waagent[1812]: 2026-01-17T00:27:01.815357Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 00:27:01.841700 systemd-networkd[1582]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 00:27:01.841712 systemd-networkd[1582]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 00:27:01.841828 systemd-networkd[1582]: eth0: DHCP lease lost Jan 17 00:27:01.844954 systemd-networkd[1582]: eth0: DHCPv6 lease lost Jan 17 00:27:01.848663 waagent[1812]: 2026-01-17T00:27:01.845450Z INFO Daemon Daemon Create user account if not exists Jan 17 00:27:01.849169 waagent[1812]: 2026-01-17T00:27:01.849062Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 00:27:01.861586 waagent[1812]: 2026-01-17T00:27:01.850674Z INFO Daemon Daemon Configure sudoer Jan 17 00:27:01.861586 waagent[1812]: 2026-01-17T00:27:01.852017Z INFO Daemon Daemon Configure sshd Jan 17 00:27:01.861586 waagent[1812]: 2026-01-17T00:27:01.853733Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 00:27:01.861586 waagent[1812]: 2026-01-17T00:27:01.854368Z INFO Daemon Daemon Deploy ssh public key. Jan 17 00:27:01.894011 systemd-networkd[1582]: eth0: DHCPv4 address 10.200.8.17/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 00:27:02.006178 systemd[1848]: Queued start job for default target default.target. Jan 17 00:27:02.011086 systemd[1848]: Created slice app.slice - User Application Slice. Jan 17 00:27:02.011124 systemd[1848]: Reached target paths.target - Paths. Jan 17 00:27:02.011144 systemd[1848]: Reached target timers.target - Timers. Jan 17 00:27:02.013211 systemd[1848]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:27:02.028918 systemd[1848]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:27:02.029071 systemd[1848]: Reached target sockets.target - Sockets. Jan 17 00:27:02.029090 systemd[1848]: Reached target basic.target - Basic System. Jan 17 00:27:02.029143 systemd[1848]: Reached target default.target - Main User Target. Jan 17 00:27:02.029182 systemd[1848]: Startup finished in 224ms. Jan 17 00:27:02.029343 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:27:02.036052 systemd[1]: Started session-1.scope - Session 1 of User core. 
Jan 17 00:27:02.037168 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:27:02.233328 kubelet[1833]: E0117 00:27:02.233230 1833 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:02.236205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:02.236445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:12.279987 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:27:12.286489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:12.407704 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:12.412910 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:13.111228 kubelet[1895]: E0117 00:27:13.111167 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:13.115447 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:13.115672 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:22.822891 chronyd[1684]: Selected source PHC0 Jan 17 00:27:23.280009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:27:23.286105 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:23.406738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:23.411837 (kubelet)[1909]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:24.034445 kubelet[1909]: E0117 00:27:24.034381 1909 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:24.037149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:24.037367 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:31.934116 waagent[1812]: 2026-01-17T00:27:31.934041Z INFO Daemon Daemon Provisioning complete Jan 17 00:27:31.946161 waagent[1812]: 2026-01-17T00:27:31.946097Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 00:27:31.953588 waagent[1812]: 2026-01-17T00:27:31.947540Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
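The kubelet failures above are the normal pre-join crash loop: /var/lib/kubelet/config.yaml is only written by kubeadm init/join, and the kubeadm-packaged kubelet unit restarts roughly every 10 seconds (the restart counters 1 and 2, about 11 s apart here, match that). A check mirroring the error message; the 10 s restart delay is the stock kubeadm RestartSec, not something this log states:

    # The kubelet exits 1 until this file exists.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    print(cfg, "exists" if cfg.exists() else "missing -- kubelet will keep restarting")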
Jan 17 00:27:31.953588 waagent[1812]: 2026-01-17T00:27:31.948494Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 00:27:32.078561 waagent[1916]: 2026-01-17T00:27:32.078437Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 00:27:32.079081 waagent[1916]: 2026-01-17T00:27:32.078637Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.6 Jan 17 00:27:32.079081 waagent[1916]: 2026-01-17T00:27:32.078734Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 00:27:32.095416 waagent[1916]: 2026-01-17T00:27:32.095331Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.6; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 00:27:32.095647 waagent[1916]: 2026-01-17T00:27:32.095595Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:27:32.095752 waagent[1916]: 2026-01-17T00:27:32.095706Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:27:32.103351 waagent[1916]: 2026-01-17T00:27:32.103277Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 00:27:32.113563 waagent[1916]: 2026-01-17T00:27:32.113500Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.177 Jan 17 00:27:32.114112 waagent[1916]: 2026-01-17T00:27:32.114052Z INFO ExtHandler Jan 17 00:27:32.114216 waagent[1916]: 2026-01-17T00:27:32.114156Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 95786cfa-cf2e-433b-9107-a8e992572fe2 eTag: 12324230295748884835 source: Fabric] Jan 17 00:27:32.114549 waagent[1916]: 2026-01-17T00:27:32.114494Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 17 00:27:32.115178 waagent[1916]: 2026-01-17T00:27:32.115119Z INFO ExtHandler Jan 17 00:27:32.115257 waagent[1916]: 2026-01-17T00:27:32.115214Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 00:27:32.118692 waagent[1916]: 2026-01-17T00:27:32.118642Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:27:32.177962 waagent[1916]: 2026-01-17T00:27:32.177864Z INFO ExtHandler Downloaded certificate {'thumbprint': '5FCE0F32B5674435F7508D2DC907DC1B6CC3CBBA', 'hasPrivateKey': True} Jan 17 00:27:32.178535 waagent[1916]: 2026-01-17T00:27:32.178475Z INFO ExtHandler Fetch goal state completed Jan 17 00:27:32.193495 waagent[1916]: 2026-01-17T00:27:32.193375Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1916 Jan 17 00:27:32.193604 waagent[1916]: 2026-01-17T00:27:32.193561Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 00:27:32.195262 waagent[1916]: 2026-01-17T00:27:32.195201Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.6', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 00:27:32.195640 waagent[1916]: 2026-01-17T00:27:32.195587Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 00:27:32.205724 waagent[1916]: 2026-01-17T00:27:32.205684Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 00:27:32.205943 waagent[1916]: 2026-01-17T00:27:32.205890Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 00:27:32.212905 waagent[1916]: 2026-01-17T00:27:32.212828Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Jan 17 00:27:32.220235 systemd[1]: Reloading requested from client PID 1929 ('systemctl') (unit waagent.service)... Jan 17 00:27:32.220255 systemd[1]: Reloading... Jan 17 00:27:32.298877 zram_generator::config[1959]: No configuration found. Jan 17 00:27:32.435182 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:32.517703 systemd[1]: Reloading finished in 296 ms. Jan 17 00:27:32.547879 waagent[1916]: 2026-01-17T00:27:32.541710Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 00:27:32.551797 systemd[1]: Reloading requested from client PID 2019 ('systemctl') (unit waagent.service)... Jan 17 00:27:32.551816 systemd[1]: Reloading... Jan 17 00:27:32.641879 zram_generator::config[2050]: No configuration found. Jan 17 00:27:32.773051 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:27:32.855600 systemd[1]: Reloading finished in 303 ms. Jan 17 00:27:32.887876 waagent[1916]: 2026-01-17T00:27:32.887371Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 00:27:32.887876 waagent[1916]: 2026-01-17T00:27:32.887624Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 00:27:32.980623 waagent[1916]: 2026-01-17T00:27:32.980477Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 17 00:27:32.981304 waagent[1916]: 2026-01-17T00:27:32.981234Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 00:27:32.982177 waagent[1916]: 2026-01-17T00:27:32.982120Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 00:27:32.982318 waagent[1916]: 2026-01-17T00:27:32.982271Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:27:32.982418 waagent[1916]: 2026-01-17T00:27:32.982381Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:27:32.982674 waagent[1916]: 2026-01-17T00:27:32.982627Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 00:27:32.982921 waagent[1916]: 2026-01-17T00:27:32.982834Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 00:27:32.982921 waagent[1916]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 00:27:32.982921 waagent[1916]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 00:27:32.982921 waagent[1916]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 00:27:32.982921 waagent[1916]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:27:32.982921 waagent[1916]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:27:32.982921 waagent[1916]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 00:27:32.983370 waagent[1916]: 2026-01-17T00:27:32.983305Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
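The routing table waagent dumps above is raw /proc/net/route, where destination and gateway addresses are little-endian hex: gateway 0108C80A is 10.200.8.1, matching the DHCP lease acquired earlier in this log. A small decoder:

    # Decode /proc/net/route the way waagent's dump should be read.
    import socket, struct

    def hex_to_ip(h: str) -> str:
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gw, *_ = line.split()
            print(f"{iface}: {hex_to_ip(dest)} via {hex_to_ip(gw)}")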
Jan 17 00:27:32.984229 waagent[1916]: 2026-01-17T00:27:32.984177Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 00:27:32.984519 waagent[1916]: 2026-01-17T00:27:32.984452Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 00:27:32.984946 waagent[1916]: 2026-01-17T00:27:32.984895Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 00:27:32.985432 waagent[1916]: 2026-01-17T00:27:32.985384Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 00:27:32.985609 waagent[1916]: 2026-01-17T00:27:32.985561Z INFO EnvHandler ExtHandler Configure routes Jan 17 00:27:32.985694 waagent[1916]: 2026-01-17T00:27:32.985656Z INFO EnvHandler ExtHandler Gateway:None Jan 17 00:27:32.985774 waagent[1916]: 2026-01-17T00:27:32.985736Z INFO EnvHandler ExtHandler Routes:None Jan 17 00:27:32.986388 waagent[1916]: 2026-01-17T00:27:32.986329Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 00:27:32.986497 waagent[1916]: 2026-01-17T00:27:32.986456Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 00:27:32.987070 waagent[1916]: 2026-01-17T00:27:32.987004Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 17 00:27:32.997673 waagent[1916]: 2026-01-17T00:27:32.996535Z INFO ExtHandler ExtHandler Jan 17 00:27:32.998238 waagent[1916]: 2026-01-17T00:27:32.997913Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: f4222f1f-0214-4c1d-9cd0-0c5ad3934a8d correlation 8b0516a6-c703-4d14-bdb3-677b645002ff created: 2026-01-17T00:26:27.814949Z] Jan 17 00:27:32.999257 waagent[1916]: 2026-01-17T00:27:32.999202Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Jan 17 00:27:33.000175 waagent[1916]: 2026-01-17T00:27:33.000118Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 17 00:27:33.005353 waagent[1916]: 2026-01-17T00:27:33.005297Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 00:27:33.005353 waagent[1916]: Executing ['ip', '-a', '-o', 'link']: Jan 17 00:27:33.005353 waagent[1916]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 00:27:33.005353 waagent[1916]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:67:5b:7c brd ff:ff:ff:ff:ff:ff Jan 17 00:27:33.005353 waagent[1916]: 3: enP8537s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:67:5b:7c brd ff:ff:ff:ff:ff:ff\ altname enP8537p0s2 Jan 17 00:27:33.005353 waagent[1916]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 00:27:33.005353 waagent[1916]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 00:27:33.005353 waagent[1916]: 2: eth0 inet 10.200.8.17/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 00:27:33.005353 waagent[1916]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 00:27:33.005353 waagent[1916]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 00:27:33.005353 waagent[1916]: 2: eth0 inet6 fe80::20d:3aff:fe67:5b7c/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 00:27:33.036684 waagent[1916]: 2026-01-17T00:27:33.036603Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 50953117-DFD8-45B5-A15E-418E5DAEE0A2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 00:27:33.052629 waagent[1916]: 2026-01-17T00:27:33.052553Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 00:27:33.052629 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.052629 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.052629 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.052629 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.052629 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.052629 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.052629 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:27:33.052629 waagent[1916]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:27:33.052629 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:27:33.056329 waagent[1916]: 2026-01-17T00:27:33.056261Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 00:27:33.056329 waagent[1916]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.056329 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.056329 waagent[1916]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.056329 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.056329 waagent[1916]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 00:27:33.056329 waagent[1916]: pkts bytes target prot opt in out source destination Jan 17 00:27:33.056329 waagent[1916]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 00:27:33.056329 waagent[1916]: 3 534 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 00:27:33.056329 waagent[1916]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 00:27:33.056743 waagent[1916]: 2026-01-17T00:27:33.056677Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 00:27:34.279942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 00:27:34.285087 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:34.401639 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:34.414182 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:34.455148 kubelet[2152]: E0117 00:27:34.455062 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:34.458000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:34.458252 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:43.863493 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:27:43.869149 systemd[1]: Started sshd@0-10.200.8.17:22-10.200.16.10:39514.service - OpenSSH per-connection server daemon (10.200.16.10:39514). 
Jan 17 00:27:44.510587 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 39514 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:44.512250 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:44.513237 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 00:27:44.520100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:44.524241 systemd-logind[1685]: New session 3 of user core. Jan 17 00:27:44.529100 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:27:44.710142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:44.715068 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:44.771578 update_engine[1686]: I20260117 00:27:44.771380 1686 update_attempter.cc:509] Updating boot flags... Jan 17 00:27:45.074179 systemd[1]: Started sshd@1-10.200.8.17:22-10.200.16.10:39520.service - OpenSSH per-connection server daemon (10.200.16.10:39520). Jan 17 00:27:45.321788 kubelet[2172]: E0117 00:27:45.321703 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:45.325034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:45.325259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:45.413885 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2196) Jan 17 00:27:45.529891 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 33 scanned by (udev-worker) (2188) Jan 17 00:27:45.714250 sshd[2182]: Accepted publickey for core from 10.200.16.10 port 39520 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:45.715917 sshd[2182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:45.720779 systemd-logind[1685]: New session 4 of user core. Jan 17 00:27:45.730015 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:27:45.906448 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 17 00:27:46.165388 sshd[2182]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:46.169594 systemd[1]: sshd@1-10.200.8.17:22-10.200.16.10:39520.service: Deactivated successfully. Jan 17 00:27:46.171692 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 00:27:46.172444 systemd-logind[1685]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:27:46.173503 systemd-logind[1685]: Removed session 4. Jan 17 00:27:46.278002 systemd[1]: Started sshd@2-10.200.8.17:22-10.200.16.10:39526.service - OpenSSH per-connection server daemon (10.200.16.10:39526). Jan 17 00:27:46.912683 sshd[2255]: Accepted publickey for core from 10.200.16.10 port 39526 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:46.914315 sshd[2255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:46.919738 systemd-logind[1685]: New session 5 of user core. Jan 17 00:27:46.926041 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 17 00:27:47.363397 sshd[2255]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:47.366651 systemd[1]: sshd@2-10.200.8.17:22-10.200.16.10:39526.service: Deactivated successfully. Jan 17 00:27:47.368912 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:27:47.370586 systemd-logind[1685]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:27:47.371611 systemd-logind[1685]: Removed session 5. Jan 17 00:27:47.481425 systemd[1]: Started sshd@3-10.200.8.17:22-10.200.16.10:39536.service - OpenSSH per-connection server daemon (10.200.16.10:39536). Jan 17 00:27:48.111950 sshd[2262]: Accepted publickey for core from 10.200.16.10 port 39536 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:48.113514 sshd[2262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:48.117899 systemd-logind[1685]: New session 6 of user core. Jan 17 00:27:48.127015 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:27:48.564927 sshd[2262]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:48.569124 systemd[1]: sshd@3-10.200.8.17:22-10.200.16.10:39536.service: Deactivated successfully. Jan 17 00:27:48.571091 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:27:48.571791 systemd-logind[1685]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:27:48.572766 systemd-logind[1685]: Removed session 6. Jan 17 00:27:48.677185 systemd[1]: Started sshd@4-10.200.8.17:22-10.200.16.10:39542.service - OpenSSH per-connection server daemon (10.200.16.10:39542). Jan 17 00:27:49.310636 sshd[2269]: Accepted publickey for core from 10.200.16.10 port 39542 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:49.312235 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:49.317495 systemd-logind[1685]: New session 7 of user core. Jan 17 00:27:49.323033 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:27:49.695524 sudo[2272]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:27:49.696207 sudo[2272]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:49.708413 sudo[2272]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:49.811067 sshd[2269]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:49.815670 systemd[1]: sshd@4-10.200.8.17:22-10.200.16.10:39542.service: Deactivated successfully. Jan 17 00:27:49.817642 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:27:49.818500 systemd-logind[1685]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:27:49.819593 systemd-logind[1685]: Removed session 7. Jan 17 00:27:49.928467 systemd[1]: Started sshd@5-10.200.8.17:22-10.200.16.10:43810.service - OpenSSH per-connection server daemon (10.200.16.10:43810). Jan 17 00:27:50.561277 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 43810 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:50.562985 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:50.567938 systemd-logind[1685]: New session 8 of user core. Jan 17 00:27:50.573227 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 17 00:27:50.912651 sudo[2281]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:27:50.913089 sudo[2281]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:50.916816 sudo[2281]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:50.921997 sudo[2280]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:27:50.922361 sudo[2280]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:50.934177 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:27:50.937966 auditctl[2284]: No rules Jan 17 00:27:50.939177 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:27:50.939424 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:27:50.944619 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:27:50.968775 augenrules[2302]: No rules Jan 17 00:27:50.970433 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:27:50.971693 sudo[2280]: pam_unix(sudo:session): session closed for user root Jan 17 00:27:51.082795 sshd[2277]: pam_unix(sshd:session): session closed for user core Jan 17 00:27:51.087550 systemd[1]: sshd@5-10.200.8.17:22-10.200.16.10:43810.service: Deactivated successfully. Jan 17 00:27:51.089588 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:27:51.090325 systemd-logind[1685]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:27:51.091399 systemd-logind[1685]: Removed session 8. Jan 17 00:27:51.195420 systemd[1]: Started sshd@6-10.200.8.17:22-10.200.16.10:43812.service - OpenSSH per-connection server daemon (10.200.16.10:43812). Jan 17 00:27:51.839421 sshd[2310]: Accepted publickey for core from 10.200.16.10 port 43812 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:27:51.841084 sshd[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:27:51.846376 systemd-logind[1685]: New session 9 of user core. Jan 17 00:27:51.852043 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:27:52.191044 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:27:52.191450 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:27:53.806216 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:27:53.808129 (dockerd)[2330]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:27:54.298101 dockerd[2330]: time="2026-01-17T00:27:54.298025699Z" level=info msg="Starting up" Jan 17 00:27:54.560547 dockerd[2330]: time="2026-01-17T00:27:54.560411597Z" level=info msg="Loading containers: start." Jan 17 00:27:54.672075 kernel: Initializing XFRM netlink socket Jan 17 00:27:54.743005 systemd-networkd[1582]: docker0: Link UP Jan 17 00:27:54.770120 dockerd[2330]: time="2026-01-17T00:27:54.770070031Z" level=info msg="Loading containers: done." 
Jan 17 00:27:54.791044 dockerd[2330]: time="2026-01-17T00:27:54.790997393Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:27:54.791227 dockerd[2330]: time="2026-01-17T00:27:54.791112196Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:27:54.791274 dockerd[2330]: time="2026-01-17T00:27:54.791243899Z" level=info msg="Daemon has completed initialization" Jan 17 00:27:54.854093 dockerd[2330]: time="2026-01-17T00:27:54.852797459Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:27:54.853388 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:27:55.529896 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 00:27:55.535102 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:27:55.655008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:27:55.667238 (kubelet)[2474]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:27:56.291823 containerd[1707]: time="2026-01-17T00:27:56.291769024Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 17 00:27:56.328420 kubelet[2474]: E0117 00:27:56.310027 2474 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:27:56.312516 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:27:56.312747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:27:57.145206 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322182460.mount: Deactivated successfully. 
Jan 17 00:27:59.046444 containerd[1707]: time="2026-01-17T00:27:59.046375446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:59.048646 containerd[1707]: time="2026-01-17T00:27:59.048443802Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=27068081" Jan 17 00:27:59.050887 containerd[1707]: time="2026-01-17T00:27:59.050835166Z" level=info msg="ImageCreate event name:\"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:59.058814 containerd[1707]: time="2026-01-17T00:27:59.058729578Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:27:59.060116 containerd[1707]: time="2026-01-17T00:27:59.059448998Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"27064672\" in 2.767624572s" Jan 17 00:27:59.060116 containerd[1707]: time="2026-01-17T00:27:59.059499799Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c\"" Jan 17 00:27:59.060473 containerd[1707]: time="2026-01-17T00:27:59.060427224Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 17 00:28:00.685360 containerd[1707]: time="2026-01-17T00:28:00.685235286Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:00.687898 containerd[1707]: time="2026-01-17T00:28:00.687745453Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=21162448" Jan 17 00:28:00.691419 containerd[1707]: time="2026-01-17T00:28:00.691364251Z" level=info msg="ImageCreate event name:\"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:00.696859 containerd[1707]: time="2026-01-17T00:28:00.696791496Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:00.698318 containerd[1707]: time="2026-01-17T00:28:00.697897526Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"22819474\" in 1.637325999s" Jan 17 00:28:00.698318 containerd[1707]: time="2026-01-17T00:28:00.697938627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942\"" Jan 17 00:28:00.698894 
containerd[1707]: time="2026-01-17T00:28:00.698776350Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 17 00:28:02.190882 containerd[1707]: time="2026-01-17T00:28:02.190806682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:02.193414 containerd[1707]: time="2026-01-17T00:28:02.193340334Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=15725935" Jan 17 00:28:02.196782 containerd[1707]: time="2026-01-17T00:28:02.196719904Z" level=info msg="ImageCreate event name:\"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:02.201942 containerd[1707]: time="2026-01-17T00:28:02.201875610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:02.203149 containerd[1707]: time="2026-01-17T00:28:02.202914731Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"17382979\" in 1.503898675s" Jan 17 00:28:02.203149 containerd[1707]: time="2026-01-17T00:28:02.202969332Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78\"" Jan 17 00:28:02.203544 containerd[1707]: time="2026-01-17T00:28:02.203513743Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 17 00:28:03.532532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480955410.mount: Deactivated successfully. 
Jan 17 00:28:03.923387 containerd[1707]: time="2026-01-17T00:28:03.923221160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:03.925663 containerd[1707]: time="2026-01-17T00:28:03.925595710Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=25965301" Jan 17 00:28:03.928639 containerd[1707]: time="2026-01-17T00:28:03.928606873Z" level=info msg="ImageCreate event name:\"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:03.933004 containerd[1707]: time="2026-01-17T00:28:03.932923165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:03.933802 containerd[1707]: time="2026-01-17T00:28:03.933640480Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"25964312\" in 1.730087535s" Jan 17 00:28:03.933802 containerd[1707]: time="2026-01-17T00:28:03.933681281Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691\"" Jan 17 00:28:03.934544 containerd[1707]: time="2026-01-17T00:28:03.934509698Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 17 00:28:04.593197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1081389667.mount: Deactivated successfully. 
Jan 17 00:28:06.136613 containerd[1707]: time="2026-01-17T00:28:06.136550616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:06.139298 containerd[1707]: time="2026-01-17T00:28:06.139066769Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388015" Jan 17 00:28:06.143757 containerd[1707]: time="2026-01-17T00:28:06.143696067Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:06.148266 containerd[1707]: time="2026-01-17T00:28:06.148215162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:06.149584 containerd[1707]: time="2026-01-17T00:28:06.149405987Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.214761386s" Jan 17 00:28:06.149584 containerd[1707]: time="2026-01-17T00:28:06.149446688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\"" Jan 17 00:28:06.150311 containerd[1707]: time="2026-01-17T00:28:06.150284806Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 17 00:28:06.529755 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 00:28:06.538283 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:06.658651 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:06.663571 (kubelet)[2616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:28:06.702779 kubelet[2616]: E0117 00:28:06.702697 2616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:28:06.705316 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:28:06.705536 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:28:07.434014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1438537219.mount: Deactivated successfully. 
Jan 17 00:28:07.454117 containerd[1707]: time="2026-01-17T00:28:07.454068648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:07.457043 containerd[1707]: time="2026-01-17T00:28:07.456806406Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321226" Jan 17 00:28:07.461314 containerd[1707]: time="2026-01-17T00:28:07.460018274Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:07.464989 containerd[1707]: time="2026-01-17T00:28:07.464150761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:07.464989 containerd[1707]: time="2026-01-17T00:28:07.464803975Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 1.314482469s" Jan 17 00:28:07.464989 containerd[1707]: time="2026-01-17T00:28:07.464858276Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\"" Jan 17 00:28:07.465776 containerd[1707]: time="2026-01-17T00:28:07.465755595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 17 00:28:08.084154 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount473679380.mount: Deactivated successfully. Jan 17 00:28:11.309441 containerd[1707]: time="2026-01-17T00:28:11.309375455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:11.312771 containerd[1707]: time="2026-01-17T00:28:11.312709534Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=74166822" Jan 17 00:28:11.315942 containerd[1707]: time="2026-01-17T00:28:11.315881209Z" level=info msg="ImageCreate event name:\"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:11.320625 containerd[1707]: time="2026-01-17T00:28:11.320570920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:28:11.322025 containerd[1707]: time="2026-01-17T00:28:11.321800349Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"74311308\" in 3.855884551s" Jan 17 00:28:11.322025 containerd[1707]: time="2026-01-17T00:28:11.321860051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115\"" Jan 17 00:28:14.624144 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 00:28:14.637719 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:14.678311 systemd[1]: Reloading requested from client PID 2710 ('systemctl') (unit session-9.scope)... Jan 17 00:28:14.678337 systemd[1]: Reloading... Jan 17 00:28:14.818872 zram_generator::config[2750]: No configuration found. Jan 17 00:28:14.936964 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:28:15.018297 systemd[1]: Reloading finished in 339 ms. Jan 17 00:28:15.074733 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 17 00:28:15.074863 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 17 00:28:15.075176 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:15.081274 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:28:22.738576 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:28:22.749233 (kubelet)[2820]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:28:22.798420 kubelet[2820]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:28:22.798420 kubelet[2820]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:28:22.798918 kubelet[2820]: I0117 00:28:22.798466 2820 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:28:22.937472 kubelet[2820]: I0117 00:28:22.937419 2820 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 17 00:28:22.937472 kubelet[2820]: I0117 00:28:22.937453 2820 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:28:22.938332 kubelet[2820]: I0117 00:28:22.938309 2820 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 17 00:28:22.938415 kubelet[2820]: I0117 00:28:22.938342 2820 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 17 00:28:22.938705 kubelet[2820]: I0117 00:28:22.938682 2820 server.go:956] "Client rotation is on, will bootstrap in background" Jan 17 00:28:24.029201 kubelet[2820]: E0117 00:28:24.028805 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 17 00:28:24.031417 kubelet[2820]: I0117 00:28:24.031217 2820 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:28:24.034737 kubelet[2820]: E0117 00:28:24.034689 2820 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:28:24.034828 kubelet[2820]: I0117 00:28:24.034762 2820 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 17 00:28:24.039028 kubelet[2820]: I0117 00:28:24.038729 2820 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Jan 17 00:28:24.040275 kubelet[2820]: I0117 00:28:24.040224 2820 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:28:24.040490 kubelet[2820]: I0117 00:28:24.040279 2820 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-c809bb5d02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:28:24.040660 kubelet[2820]: I0117 00:28:24.040498 2820 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:28:24.040660 kubelet[2820]: I0117 00:28:24.040512 2820 container_manager_linux.go:306] "Creating device plugin manager" Jan 17 00:28:24.040660 kubelet[2820]: I0117 00:28:24.040633 2820 container_manager_linux.go:315] "Creating Dynamic Resource 
Allocation (DRA) manager" Jan 17 00:28:24.080725 kubelet[2820]: I0117 00:28:24.080682 2820 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:28:24.083443 kubelet[2820]: I0117 00:28:24.083409 2820 kubelet.go:475] "Attempting to sync node with API server" Jan 17 00:28:24.084006 kubelet[2820]: I0117 00:28:24.083460 2820 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:28:24.084006 kubelet[2820]: I0117 00:28:24.083500 2820 kubelet.go:387] "Adding apiserver pod source" Jan 17 00:28:24.084006 kubelet[2820]: I0117 00:28:24.083517 2820 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:28:24.086265 kubelet[2820]: E0117 00:28:24.086199 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-c809bb5d02&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 17 00:28:24.087217 kubelet[2820]: I0117 00:28:24.086912 2820 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:28:24.088455 kubelet[2820]: I0117 00:28:24.087832 2820 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 17 00:28:24.088455 kubelet[2820]: I0117 00:28:24.087900 2820 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 17 00:28:24.088455 kubelet[2820]: W0117 00:28:24.087964 2820 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 17 00:28:24.090801 kubelet[2820]: I0117 00:28:24.090776 2820 server.go:1262] "Started kubelet" Jan 17 00:28:24.091007 kubelet[2820]: E0117 00:28:24.090981 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:28:24.102466 kubelet[2820]: I0117 00:28:24.101385 2820 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:28:24.102466 kubelet[2820]: E0117 00:28:24.099900 2820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-c809bb5d02.188b5d27c6a3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-c809bb5d02,UID:ci-4081.3.6-n-c809bb5d02,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-c809bb5d02,},FirstTimestamp:2026-01-17 00:28:24.090748494 +0000 UTC m=+1.337483551,LastTimestamp:2026-01-17 00:28:24.090748494 +0000 UTC m=+1.337483551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-c809bb5d02,}" Jan 17 00:28:24.103517 kubelet[2820]: I0117 00:28:24.103156 2820 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:28:24.105763 kubelet[2820]: I0117 00:28:24.105725 2820 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 17 00:28:24.107618 kubelet[2820]: I0117 00:28:24.107091 2820 server.go:310] "Adding debug handlers to kubelet server" Jan 17 00:28:24.113474 kubelet[2820]: I0117 00:28:24.113439 2820 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:28:24.113625 kubelet[2820]: I0117 00:28:24.113607 2820 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 17 00:28:24.113978 kubelet[2820]: I0117 00:28:24.113960 2820 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:28:24.114464 kubelet[2820]: I0117 00:28:24.114443 2820 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:28:24.116252 kubelet[2820]: E0117 00:28:24.116228 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" Jan 17 00:28:24.117662 kubelet[2820]: I0117 00:28:24.117637 2820 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 17 00:28:24.117737 kubelet[2820]: I0117 00:28:24.117725 2820 reconciler.go:29] "Reconciler: start to sync state" Jan 17 00:28:24.118344 kubelet[2820]: E0117 00:28:24.118296 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="200ms" Jan 17 00:28:24.118483 kubelet[2820]: E0117 00:28:24.118458 2820 reflector.go:205] "Failed to watch" 
err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 17 00:28:24.121878 kubelet[2820]: I0117 00:28:24.121619 2820 factory.go:223] Registration of the containerd container factory successfully Jan 17 00:28:24.121878 kubelet[2820]: I0117 00:28:24.121639 2820 factory.go:223] Registration of the systemd container factory successfully Jan 17 00:28:24.121878 kubelet[2820]: I0117 00:28:24.121741 2820 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:28:24.140338 kubelet[2820]: E0117 00:28:24.140311 2820 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:28:24.166896 kubelet[2820]: I0117 00:28:24.166630 2820 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:28:24.166896 kubelet[2820]: I0117 00:28:24.166647 2820 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:28:24.166896 kubelet[2820]: I0117 00:28:24.166665 2820 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:28:24.217259 kubelet[2820]: E0117 00:28:24.217144 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" Jan 17 00:28:24.318067 kubelet[2820]: E0117 00:28:24.317895 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" Jan 17 00:28:24.319575 kubelet[2820]: E0117 00:28:24.319393 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="400ms" Jan 17 00:28:24.328397 kubelet[2820]: I0117 00:28:24.328330 2820 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 17 00:28:24.329618 kubelet[2820]: I0117 00:28:24.329525 2820 policy_none.go:49] "None policy: Start" Jan 17 00:28:24.329618 kubelet[2820]: I0117 00:28:24.329553 2820 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 17 00:28:24.329618 kubelet[2820]: I0117 00:28:24.329570 2820 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 17 00:28:24.332475 kubelet[2820]: I0117 00:28:24.332234 2820 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Jan 17 00:28:24.332475 kubelet[2820]: I0117 00:28:24.332288 2820 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 17 00:28:24.332475 kubelet[2820]: I0117 00:28:24.332324 2820 kubelet.go:2427] "Starting kubelet main sync loop" Jan 17 00:28:24.333068 kubelet[2820]: E0117 00:28:24.333001 2820 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:28:24.334177 kubelet[2820]: E0117 00:28:24.334047 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 17 00:28:24.375498 kubelet[2820]: I0117 00:28:24.375451 2820 policy_none.go:47] "Start" Jan 17 00:28:24.381161 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:28:24.397118 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:28:24.401337 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:28:24.406592 kubelet[2820]: E0117 00:28:24.406558 2820 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 17 00:28:24.406817 kubelet[2820]: I0117 00:28:24.406796 2820 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:28:24.406909 kubelet[2820]: I0117 00:28:24.406817 2820 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:28:24.409205 kubelet[2820]: I0117 00:28:24.407626 2820 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:28:24.410166 kubelet[2820]: E0117 00:28:24.409939 2820 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:28:24.410166 kubelet[2820]: E0117 00:28:24.409998 2820 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-c809bb5d02\" not found" Jan 17 00:28:24.510733 kubelet[2820]: I0117 00:28:24.510681 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.511192 kubelet[2820]: E0117 00:28:24.511152 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.518658 kubelet[2820]: I0117 00:28:24.518623 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.518658 kubelet[2820]: I0117 00:28:24.518654 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.518783 kubelet[2820]: I0117 00:28:24.518677 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.714041 kubelet[2820]: I0117 00:28:24.713912 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.714400 kubelet[2820]: E0117 00:28:24.714362 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.720052 kubelet[2820]: E0117 00:28:24.720005 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="800ms" Jan 17 00:28:24.837015 systemd[1]: Created slice kubepods-burstable-pod29646b3397cd909e106a6d9e53dae6b6.slice - libcontainer container kubepods-burstable-pod29646b3397cd909e106a6d9e53dae6b6.slice. 
Jan 17 00:28:24.848505 kubelet[2820]: E0117 00:28:24.848470 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.921220 kubelet[2820]: I0117 00:28:24.920997 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.921220 kubelet[2820]: I0117 00:28:24.921057 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.921220 kubelet[2820]: I0117 00:28:24.921085 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.921220 kubelet[2820]: I0117 00:28:24.921137 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.921220 kubelet[2820]: I0117 00:28:24.921154 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" Jan 17 00:28:24.963761 kubelet[2820]: E0117 00:28:24.963699 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 17 00:28:25.087771 containerd[1707]: time="2026-01-17T00:28:25.086980608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-c809bb5d02,Uid:29646b3397cd909e106a6d9e53dae6b6,Namespace:kube-system,Attempt:0,}" Jan 17 00:28:25.095741 systemd[1]: Created slice kubepods-burstable-pod0fa6f39459edf9ff982b81a8d4757b86.slice - libcontainer container kubepods-burstable-pod0fa6f39459edf9ff982b81a8d4757b86.slice. 
Jan 17 00:28:25.098808 kubelet[2820]: E0117 00:28:25.098449 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.116488 kubelet[2820]: I0117 00:28:25.116452 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.116874 kubelet[2820]: E0117 00:28:25.116829 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.123108 kubelet[2820]: I0117 00:28:25.123076 2820 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65318e5110e961c78d6ca78396d191c8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-c809bb5d02\" (UID: \"65318e5110e961c78d6ca78396d191c8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.135905 containerd[1707]: time="2026-01-17T00:28:25.135445150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-c809bb5d02,Uid:0fa6f39459edf9ff982b81a8d4757b86,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:25.148292 systemd[1]: Created slice kubepods-burstable-pod65318e5110e961c78d6ca78396d191c8.slice - libcontainer container kubepods-burstable-pod65318e5110e961c78d6ca78396d191c8.slice.
Jan 17 00:28:25.150379 kubelet[2820]: E0117 00:28:25.150353 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.525946 kubelet[2820]: E0117 00:28:25.340805 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 17 00:28:25.525946 kubelet[2820]: E0117 00:28:25.482665 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-c809bb5d02&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:28:25.525946 kubelet[2820]: E0117 00:28:25.520623 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="1.6s"
Jan 17 00:28:25.603384 kubelet[2820]: E0117 00:28:25.603326 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:28:25.918918 kubelet[2820]: I0117 00:28:25.918879 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:25.919341 kubelet[2820]: E0117 00:28:25.919301 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:26.072502 kubelet[2820]: E0117 00:28:26.072393 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 17 00:28:30.438640 kubelet[2820]: E0117 00:28:27.122115 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="3.2s"
Jan 17 00:28:30.438640 kubelet[2820]: E0117 00:28:27.432014 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 17 00:28:30.438640 kubelet[2820]: E0117 00:28:27.520671 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:28:30.438640 kubelet[2820]: I0117 00:28:27.522054 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:30.438640 kubelet[2820]: E0117 00:28:27.522387 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:30.438640 kubelet[2820]: E0117 00:28:27.632303 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-c809bb5d02&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:28:30.439305 kubelet[2820]: E0117 00:28:27.861378 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:28:30.439305 kubelet[2820]: E0117 00:28:30.201815 2820 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.200.8.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 17 00:28:30.439305 kubelet[2820]: E0117 00:28:30.323120 2820 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-n-c809bb5d02?timeout=10s\": dial tcp 10.200.8.17:6443: connect: connection refused" interval="6.4s"
Jan 17 00:28:30.483118 containerd[1707]: time="2026-01-17T00:28:30.483070622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-c809bb5d02,Uid:65318e5110e961c78d6ca78396d191c8,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:30.635632 kubelet[2820]: E0117 00:28:30.635579 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.200.8.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 17 00:28:32.677097 kubelet[2820]: I0117 00:28:30.724536 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:32.677097 kubelet[2820]: E0117 00:28:30.724923 2820 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.200.8.17:6443/api/v1/nodes\": dial tcp 10.200.8.17:6443: connect: connection refused" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:32.677097 kubelet[2820]: E0117 00:28:31.383046 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.200.8.17:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 17 00:28:32.968955 kubelet[2820]: E0117 00:28:32.968736 2820 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.17:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.17:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-n-c809bb5d02.188b5d27c6a3464e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-n-c809bb5d02,UID:ci-4081.3.6-n-c809bb5d02,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-n-c809bb5d02,},FirstTimestamp:2026-01-17 00:28:24.090748494 +0000 UTC m=+1.337483551,LastTimestamp:2026-01-17 00:28:24.090748494 +0000 UTC m=+1.337483551,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-n-c809bb5d02,}"
Jan 17 00:28:33.283531 kubelet[2820]: E0117 00:28:33.283474 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.200.8.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-n-c809bb5d02&limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 17 00:28:33.799661 kubelet[2820]: E0117 00:28:33.799609 2820 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.200.8.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.17:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 17 00:28:34.410937 kubelet[2820]: E0117 00:28:34.410897 2820 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:35.580889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1370524759.mount: Deactivated successfully.
Jan 17 00:28:35.606327 containerd[1707]: time="2026-01-17T00:28:35.606272888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:28:35.609326 containerd[1707]: time="2026-01-17T00:28:35.609270647Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 17 00:28:35.612005 containerd[1707]: time="2026-01-17T00:28:35.611966200Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:28:35.614532 containerd[1707]: time="2026-01-17T00:28:35.614494050Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:28:35.617050 containerd[1707]: time="2026-01-17T00:28:35.616951698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:28:35.620046 containerd[1707]: time="2026-01-17T00:28:35.620010758Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:28:35.622770 containerd[1707]: time="2026-01-17T00:28:35.622692311Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 00:28:35.629766 containerd[1707]: time="2026-01-17T00:28:35.629721948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 00:28:35.632874 containerd[1707]: time="2026-01-17T00:28:35.631736188Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 10.496202536s"
Jan 17 00:28:35.633602 containerd[1707]: time="2026-01-17T00:28:35.633564524Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 10.546471014s"
Jan 17 00:28:35.636073 containerd[1707]: time="2026-01-17T00:28:35.636040572Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 5.152857148s"
Jan 17 00:28:35.916999 containerd[1707]: time="2026-01-17T00:28:35.916593776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:35.916999 containerd[1707]: time="2026-01-17T00:28:35.916662177Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:35.916999 containerd[1707]: time="2026-01-17T00:28:35.916677077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.916999 containerd[1707]: time="2026-01-17T00:28:35.916774079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.922672 containerd[1707]: time="2026-01-17T00:28:35.922056383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:35.923215 containerd[1707]: time="2026-01-17T00:28:35.923124404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:35.923215 containerd[1707]: time="2026-01-17T00:28:35.923151804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.924223 containerd[1707]: time="2026-01-17T00:28:35.923457010Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.927882 containerd[1707]: time="2026-01-17T00:28:35.927512390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:35.927882 containerd[1707]: time="2026-01-17T00:28:35.927574791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:35.927882 containerd[1707]: time="2026-01-17T00:28:35.927599992Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.927882 containerd[1707]: time="2026-01-17T00:28:35.927703394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:35.956071 systemd[1]: Started cri-containerd-900d3ae9a785b97a2322e7b102e2e7bb6e0bf835cb19218fb497a2536f4fb21b.scope - libcontainer container 900d3ae9a785b97a2322e7b102e2e7bb6e0bf835cb19218fb497a2536f4fb21b.
Jan 17 00:28:35.970013 systemd[1]: Started cri-containerd-50f028ad13c1149f297b0acdf30f4f4ad0c7e6a5407e032dadbc8aec6be7f2f1.scope - libcontainer container 50f028ad13c1149f297b0acdf30f4f4ad0c7e6a5407e032dadbc8aec6be7f2f1.
Jan 17 00:28:35.972251 systemd[1]: Started cri-containerd-555a8b9039c78fef6cfb64f464f1749895ad3ae1dc22de74eec6bcfa4a09ca47.scope - libcontainer container 555a8b9039c78fef6cfb64f464f1749895ad3ae1dc22de74eec6bcfa4a09ca47.
Jan 17 00:28:36.054949 containerd[1707]: time="2026-01-17T00:28:36.054894989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-n-c809bb5d02,Uid:65318e5110e961c78d6ca78396d191c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"555a8b9039c78fef6cfb64f464f1749895ad3ae1dc22de74eec6bcfa4a09ca47\""
Jan 17 00:28:36.058742 containerd[1707]: time="2026-01-17T00:28:36.058694863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-n-c809bb5d02,Uid:29646b3397cd909e106a6d9e53dae6b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"900d3ae9a785b97a2322e7b102e2e7bb6e0bf835cb19218fb497a2536f4fb21b\""
Jan 17 00:28:36.060765 containerd[1707]: time="2026-01-17T00:28:36.060424397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-n-c809bb5d02,Uid:0fa6f39459edf9ff982b81a8d4757b86,Namespace:kube-system,Attempt:0,} returns sandbox id \"50f028ad13c1149f297b0acdf30f4f4ad0c7e6a5407e032dadbc8aec6be7f2f1\""
Jan 17 00:28:36.066820 containerd[1707]: time="2026-01-17T00:28:36.066689520Z" level=info msg="CreateContainer within sandbox \"555a8b9039c78fef6cfb64f464f1749895ad3ae1dc22de74eec6bcfa4a09ca47\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 00:28:36.071632 containerd[1707]: time="2026-01-17T00:28:36.071591916Z" level=info msg="CreateContainer within sandbox \"50f028ad13c1149f297b0acdf30f4f4ad0c7e6a5407e032dadbc8aec6be7f2f1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 00:28:36.076074 containerd[1707]: time="2026-01-17T00:28:36.075974102Z" level=info msg="CreateContainer within sandbox \"900d3ae9a785b97a2322e7b102e2e7bb6e0bf835cb19218fb497a2536f4fb21b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 00:28:36.111476 containerd[1707]: time="2026-01-17T00:28:36.111415197Z" level=info msg="CreateContainer within sandbox \"555a8b9039c78fef6cfb64f464f1749895ad3ae1dc22de74eec6bcfa4a09ca47\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"814d6fde2b0fb10c695fc0b2e1f1beaeb7a8d8c7ccb079bcbed13a6664bc562e\""
Jan 17 00:28:36.112358 containerd[1707]: time="2026-01-17T00:28:36.112178512Z" level=info msg="StartContainer for \"814d6fde2b0fb10c695fc0b2e1f1beaeb7a8d8c7ccb079bcbed13a6664bc562e\""
Jan 17 00:28:36.123683 containerd[1707]: time="2026-01-17T00:28:36.123638837Z" level=info msg="CreateContainer within sandbox \"50f028ad13c1149f297b0acdf30f4f4ad0c7e6a5407e032dadbc8aec6be7f2f1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c398ece19956f1288cc76876024e9b1b91594f29211c680e49043c9ba2430b44\""
Jan 17 00:28:36.125392 containerd[1707]: time="2026-01-17T00:28:36.124367051Z" level=info msg="StartContainer for \"c398ece19956f1288cc76876024e9b1b91594f29211c680e49043c9ba2430b44\""
Jan 17 00:28:36.134734 containerd[1707]: time="2026-01-17T00:28:36.134690254Z" level=info msg="CreateContainer within sandbox \"900d3ae9a785b97a2322e7b102e2e7bb6e0bf835cb19218fb497a2536f4fb21b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2267e5a50276b0c79e017810b297870bb7b41f7bbe6dc7808a6cb95946757316\""
Jan 17 00:28:36.135463 containerd[1707]: time="2026-01-17T00:28:36.135435168Z" level=info msg="StartContainer for \"2267e5a50276b0c79e017810b297870bb7b41f7bbe6dc7808a6cb95946757316\""
Jan 17 00:28:36.157064 systemd[1]: Started cri-containerd-814d6fde2b0fb10c695fc0b2e1f1beaeb7a8d8c7ccb079bcbed13a6664bc562e.scope - libcontainer container 814d6fde2b0fb10c695fc0b2e1f1beaeb7a8d8c7ccb079bcbed13a6664bc562e.
Jan 17 00:28:36.175070 systemd[1]: Started cri-containerd-c398ece19956f1288cc76876024e9b1b91594f29211c680e49043c9ba2430b44.scope - libcontainer container c398ece19956f1288cc76876024e9b1b91594f29211c680e49043c9ba2430b44.
Jan 17 00:28:36.193031 systemd[1]: Started cri-containerd-2267e5a50276b0c79e017810b297870bb7b41f7bbe6dc7808a6cb95946757316.scope - libcontainer container 2267e5a50276b0c79e017810b297870bb7b41f7bbe6dc7808a6cb95946757316.
Jan 17 00:28:36.275872 containerd[1707]: time="2026-01-17T00:28:36.275143409Z" level=info msg="StartContainer for \"814d6fde2b0fb10c695fc0b2e1f1beaeb7a8d8c7ccb079bcbed13a6664bc562e\" returns successfully"
Jan 17 00:28:36.276234 containerd[1707]: time="2026-01-17T00:28:36.275192210Z" level=info msg="StartContainer for \"2267e5a50276b0c79e017810b297870bb7b41f7bbe6dc7808a6cb95946757316\" returns successfully"
Jan 17 00:28:36.287752 containerd[1707]: time="2026-01-17T00:28:36.287702155Z" level=info msg="StartContainer for \"c398ece19956f1288cc76876024e9b1b91594f29211c680e49043c9ba2430b44\" returns successfully"
Jan 17 00:28:36.364801 kubelet[2820]: E0117 00:28:36.364768 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:36.374172 kubelet[2820]: E0117 00:28:36.373692 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:36.375028 kubelet[2820]: E0117 00:28:36.374985 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:37.128896 kubelet[2820]: I0117 00:28:37.128314 2820 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:37.376913 kubelet[2820]: E0117 00:28:37.376815 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:37.379433 kubelet[2820]: E0117 00:28:37.379045 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:37.379614 kubelet[2820]: E0117 00:28:37.379591 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:38.379237 kubelet[2820]: E0117 00:28:38.379197 2820 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:39.068150 kubelet[2820]: E0117 00:28:39.068091 2820 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-n-c809bb5d02\" not found" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:39.254214 kubelet[2820]: I0117 00:28:39.254169 2820 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:39.254457 kubelet[2820]: E0117 00:28:39.254227 2820 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-n-c809bb5d02\": node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.399468 kubelet[2820]: E0117 00:28:39.399292 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.499810 kubelet[2820]: E0117 00:28:39.499761 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.601031 kubelet[2820]: E0117 00:28:39.600900 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.701648 kubelet[2820]: E0117 00:28:39.701503 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.802292 kubelet[2820]: E0117 00:28:39.802180 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:39.902982 kubelet[2820]: E0117 00:28:39.902923 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.003685 kubelet[2820]: E0117 00:28:40.003546 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.103898 kubelet[2820]: E0117 00:28:40.103811 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.204400 kubelet[2820]: E0117 00:28:40.204340 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.305186 kubelet[2820]: E0117 00:28:40.305128 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.406375 kubelet[2820]: E0117 00:28:40.406327 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.507330 kubelet[2820]: E0117 00:28:40.507267 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.607873 kubelet[2820]: E0117 00:28:40.607687 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.708701 kubelet[2820]: E0117 00:28:40.708605 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.809725 kubelet[2820]: E0117 00:28:40.809465 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:40.910306 kubelet[2820]: E0117 00:28:40.910160 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.010962 kubelet[2820]: E0117 00:28:41.010914 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.111282 kubelet[2820]: E0117 00:28:41.111236 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.212322 kubelet[2820]: E0117 00:28:41.211834 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.312875 kubelet[2820]: E0117 00:28:41.312786 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.413166 kubelet[2820]: E0117 00:28:41.413128 2820 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:41.518238 kubelet[2820]: I0117 00:28:41.518104 2820 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:41.529240 kubelet[2820]: I0117 00:28:41.528820 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:41.529240 kubelet[2820]: I0117 00:28:41.529008 2820 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:41.536435 kubelet[2820]: I0117 00:28:41.536205 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:41.536435 kubelet[2820]: I0117 00:28:41.536354 2820 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:41.542681 kubelet[2820]: I0117 00:28:41.542657 2820 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:42.105239 kubelet[2820]: I0117 00:28:42.105178 2820 apiserver.go:52] "Watching apiserver"
Jan 17 00:28:42.117948 kubelet[2820]: I0117 00:28:42.117896 2820 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:28:42.208750 systemd[1]: Reloading requested from client PID 3105 ('systemctl') (unit session-9.scope)...
Jan 17 00:28:42.208770 systemd[1]: Reloading...
Jan 17 00:28:42.323873 zram_generator::config[3144]: No configuration found.
Jan 17 00:28:42.464297 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:28:42.565154 systemd[1]: Reloading finished in 354 ms.
Jan 17 00:28:42.614769 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:42.632266 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 00:28:42.632566 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:42.637140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 00:28:42.785244 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 00:28:42.796181 (kubelet)[3212]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 00:28:42.844139 kubelet[3212]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 17 00:28:42.844139 kubelet[3212]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 00:28:42.844599 kubelet[3212]: I0117 00:28:42.844194 3212 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 00:28:42.849829 kubelet[3212]: I0117 00:28:42.849786 3212 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 17 00:28:42.849829 kubelet[3212]: I0117 00:28:42.849812 3212 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 00:28:42.849829 kubelet[3212]: I0117 00:28:42.849855 3212 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 17 00:28:42.850053 kubelet[3212]: I0117 00:28:42.849864 3212 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 17 00:28:42.850104 kubelet[3212]: I0117 00:28:42.850098 3212 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 17 00:28:42.851266 kubelet[3212]: I0117 00:28:42.851238 3212 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jan 17 00:28:42.856583 kubelet[3212]: I0117 00:28:42.855020 3212 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 00:28:42.859824 kubelet[3212]: E0117 00:28:42.859799 3212 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 17 00:28:42.860064 kubelet[3212]: I0117 00:28:42.860049 3212 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Jan 17 00:28:42.864797 kubelet[3212]: I0117 00:28:42.864777 3212 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 17 00:28:42.865111 kubelet[3212]: I0117 00:28:42.865076 3212 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 00:28:42.865279 kubelet[3212]: I0117 00:28:42.865109 3212 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-n-c809bb5d02","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 17 00:28:42.865410 kubelet[3212]: I0117 00:28:42.865288 3212 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 00:28:42.865410 kubelet[3212]: I0117 00:28:42.865303 3212 container_manager_linux.go:306] "Creating device plugin manager"
Jan 17 00:28:42.865410 kubelet[3212]: I0117 00:28:42.865333 3212 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 17 00:28:42.866558 kubelet[3212]: I0117 00:28:42.866537 3212 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:42.866743 kubelet[3212]: I0117 00:28:42.866723 3212 kubelet.go:475] "Attempting to sync node with API server"
Jan 17 00:28:42.866743 kubelet[3212]: I0117 00:28:42.866742 3212 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 00:28:42.867873 kubelet[3212]: I0117 00:28:42.867613 3212 kubelet.go:387] "Adding apiserver pod source"
Jan 17 00:28:42.867873 kubelet[3212]: I0117 00:28:42.867670 3212 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 00:28:42.870934 kubelet[3212]: I0117 00:28:42.870910 3212 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 00:28:42.872377 kubelet[3212]: I0117 00:28:42.872354 3212 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 17 00:28:42.872487 kubelet[3212]: I0117 00:28:42.872397 3212 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 17 00:28:42.878507 kubelet[3212]: I0117 00:28:42.878481 3212 server.go:1262] "Started kubelet"
Jan 17 00:28:42.880166 kubelet[3212]: I0117 00:28:42.879989 3212 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 00:28:42.891617 kubelet[3212]: I0117 00:28:42.889340 3212 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 00:28:42.891617 kubelet[3212]: I0117 00:28:42.891430 3212 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 17 00:28:42.894866 kubelet[3212]: I0117 00:28:42.893064 3212 server.go:310] "Adding debug handlers to kubelet server"
Jan 17 00:28:42.900161 kubelet[3212]: I0117 00:28:42.899066 3212 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 17 00:28:42.900161 kubelet[3212]: E0117 00:28:42.899441 3212 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081.3.6-n-c809bb5d02\" not found"
Jan 17 00:28:42.902232 kubelet[3212]: I0117 00:28:42.900922 3212 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 00:28:42.902232 kubelet[3212]: I0117 00:28:42.900978 3212 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 17 00:28:42.902232 kubelet[3212]: I0117 00:28:42.901171 3212 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 00:28:42.902232 kubelet[3212]: I0117 00:28:42.901502 3212 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 17 00:28:42.906150 kubelet[3212]: I0117 00:28:42.906131 3212 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 17 00:28:42.906276 kubelet[3212]: I0117 00:28:42.906262 3212 reconciler.go:29] "Reconciler: start to sync state"
Jan 17 00:28:42.910149 kubelet[3212]: I0117 00:28:42.910123 3212 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 17 00:28:42.910232 kubelet[3212]: I0117 00:28:42.910155 3212 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 17 00:28:42.910232 kubelet[3212]: I0117 00:28:42.910179 3212 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 17 00:28:42.910314 kubelet[3212]: E0117 00:28:42.910233 3212 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 00:28:42.927196 kubelet[3212]: I0117 00:28:42.926989 3212 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 00:28:42.927812 kubelet[3212]: E0117 00:28:42.927789 3212 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 00:28:42.932460 kubelet[3212]: I0117 00:28:42.932421 3212 factory.go:223] Registration of the containerd container factory successfully
Jan 17 00:28:42.932460 kubelet[3212]: I0117 00:28:42.932446 3212 factory.go:223] Registration of the systemd container factory successfully
Jan 17 00:28:42.983886 kubelet[3212]: I0117 00:28:42.983693 3212 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 17 00:28:42.983886 kubelet[3212]: I0117 00:28:42.983714 3212 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 17 00:28:42.983886 kubelet[3212]: I0117 00:28:42.983737 3212 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.983907 3212 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.983919 3212 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.983939 3212 policy_none.go:49] "None policy: Start"
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.983950 3212 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.983963 3212 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.984077 3212 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint"
Jan 17 00:28:42.984145 kubelet[3212]: I0117 00:28:42.984088 3212 policy_none.go:47] "Start"
Jan 17 00:28:42.990288 kubelet[3212]: E0117 00:28:42.990256 3212 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 17 00:28:42.990631 kubelet[3212]: I0117 00:28:42.990450 3212 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 17 00:28:42.990631 kubelet[3212]: I0117 00:28:42.990469 3212 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 00:28:42.991191 kubelet[3212]: I0117 00:28:42.990963 3212 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 00:28:42.994886 kubelet[3212]: E0117 00:28:42.993254 3212 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 17 00:28:43.011678 kubelet[3212]: I0117 00:28:43.011645 3212 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.012674 kubelet[3212]: I0117 00:28:43.012645 3212 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.012946 kubelet[3212]: I0117 00:28:43.012917 3212 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.024741 kubelet[3212]: I0117 00:28:43.024716 3212 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:43.024956 kubelet[3212]: E0117 00:28:43.024927 3212 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.025393 kubelet[3212]: I0117 00:28:43.025372 3212 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:43.025560 kubelet[3212]: E0117 00:28:43.025532 3212 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.025639 kubelet[3212]: I0117 00:28:43.025409 3212 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:43.025639 kubelet[3212]: E0117 00:28:43.025592 3212 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-c809bb5d02\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.093325 kubelet[3212]: I0117 00:28:43.093192 3212 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.107628 kubelet[3212]: I0117 00:28:43.107084 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.107628 kubelet[3212]: I0117 00:28:43.107127 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.107628 kubelet[3212]: I0117 00:28:43.107157 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/29646b3397cd909e106a6d9e53dae6b6-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-n-c809bb5d02\" (UID: \"29646b3397cd909e106a6d9e53dae6b6\") " pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.107628 kubelet[3212]: I0117 00:28:43.107180 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.107628 kubelet[3212]: I0117 00:28:43.107203 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.108079 kubelet[3212]: I0117 00:28:43.107241 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.108079 kubelet[3212]: I0117 00:28:43.107294 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.108079 kubelet[3212]: I0117 00:28:43.107319 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/65318e5110e961c78d6ca78396d191c8-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-n-c809bb5d02\" (UID: \"65318e5110e961c78d6ca78396d191c8\") " pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.108079 kubelet[3212]: I0117 00:28:43.107343 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0fa6f39459edf9ff982b81a8d4757b86-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-n-c809bb5d02\" (UID: \"0fa6f39459edf9ff982b81a8d4757b86\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.111872 kubelet[3212]: I0117 00:28:43.109751 3212 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.111872 kubelet[3212]: I0117 00:28:43.109876 3212 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.869961 kubelet[3212]: I0117 00:28:43.869918 3212 apiserver.go:52] "Watching apiserver"
Jan 17 00:28:43.906505 kubelet[3212]: I0117 00:28:43.906459 3212 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 17 00:28:43.966523 kubelet[3212]: I0117 00:28:43.966487 3212 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:43.980866 kubelet[3212]: I0117 00:28:43.980571 3212 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Jan 17 00:28:43.980866 kubelet[3212]: E0117 00:28:43.980638 3212 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-n-c809bb5d02\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02"
Jan 17 00:28:44.004527 kubelet[3212]: I0117 00:28:44.004323 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-n-c809bb5d02" podStartSLOduration=3.004302075 podStartE2EDuration="3.004302075s" podCreationTimestamp="2026-01-17 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:43.992756637 +0000 UTC m=+1.191192708" watchObservedRunningTime="2026-01-17 00:28:44.004302075 +0000 UTC m=+1.202738046"
Jan 17 00:28:44.015793 kubelet[3212]: I0117 00:28:44.015726 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-n-c809bb5d02" podStartSLOduration=3.01570681 podStartE2EDuration="3.01570681s" podCreationTimestamp="2026-01-17 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:44.014884393 +0000 UTC m=+1.213320364" watchObservedRunningTime="2026-01-17 00:28:44.01570681 +0000 UTC m=+1.214142781"
Jan 17 00:28:44.016052 kubelet[3212]: I0117 00:28:44.015868 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-n-c809bb5d02" podStartSLOduration=3.015859613 podStartE2EDuration="3.015859613s" podCreationTimestamp="2026-01-17 00:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:44.004276774 +0000 UTC m=+1.202712745" watchObservedRunningTime="2026-01-17 00:28:44.015859613 +0000 UTC m=+1.214295684"
Jan 17 00:28:46.858739 kubelet[3212]: I0117 00:28:46.858693 3212 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 17 00:28:46.861865 containerd[1707]: time="2026-01-17T00:28:46.861402167Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 17 00:28:46.862256 kubelet[3212]: I0117 00:28:46.861671 3212 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 17 00:28:48.042201 systemd[1]: Created slice kubepods-besteffort-pod8477113f_eede_40b4_a279_de39d3b727d7.slice - libcontainer container kubepods-besteffort-pod8477113f_eede_40b4_a279_de39d3b727d7.slice.
Jan 17 00:28:48.134411 systemd[1]: Created slice kubepods-besteffort-pod3f91f1f2_8661_48d9_869a_379dfb590134.slice - libcontainer container kubepods-besteffort-pod3f91f1f2_8661_48d9_869a_379dfb590134.slice.
Jan 17 00:28:48.139143 kubelet[3212]: I0117 00:28:48.139103 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt949\" (UniqueName: \"kubernetes.io/projected/8477113f-eede-40b4-a279-de39d3b727d7-kube-api-access-rt949\") pod \"kube-proxy-lk5cb\" (UID: \"8477113f-eede-40b4-a279-de39d3b727d7\") " pod="kube-system/kube-proxy-lk5cb"
Jan 17 00:28:48.139528 kubelet[3212]: I0117 00:28:48.139150 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f91f1f2-8661-48d9-869a-379dfb590134-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-k9rhv\" (UID: \"3f91f1f2-8661-48d9-869a-379dfb590134\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-k9rhv"
Jan 17 00:28:48.139528 kubelet[3212]: I0117 00:28:48.139175 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vdhr\" (UniqueName: \"kubernetes.io/projected/3f91f1f2-8661-48d9-869a-379dfb590134-kube-api-access-5vdhr\") pod \"tigera-operator-65cdcdfd6d-k9rhv\" (UID: \"3f91f1f2-8661-48d9-869a-379dfb590134\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-k9rhv"
Jan 17 00:28:48.139528 kubelet[3212]: I0117 00:28:48.139199 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8477113f-eede-40b4-a279-de39d3b727d7-kube-proxy\") pod \"kube-proxy-lk5cb\" (UID: \"8477113f-eede-40b4-a279-de39d3b727d7\") " pod="kube-system/kube-proxy-lk5cb"
Jan 17 00:28:48.139528 kubelet[3212]: I0117 00:28:48.139219 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8477113f-eede-40b4-a279-de39d3b727d7-xtables-lock\") pod \"kube-proxy-lk5cb\" (UID: \"8477113f-eede-40b4-a279-de39d3b727d7\") " pod="kube-system/kube-proxy-lk5cb"
Jan 17 00:28:48.139528 kubelet[3212]: I0117 00:28:48.139238 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8477113f-eede-40b4-a279-de39d3b727d7-lib-modules\") pod \"kube-proxy-lk5cb\" (UID: \"8477113f-eede-40b4-a279-de39d3b727d7\") " pod="kube-system/kube-proxy-lk5cb"
Jan 17 00:28:48.356365 containerd[1707]: time="2026-01-17T00:28:48.355885773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk5cb,Uid:8477113f-eede-40b4-a279-de39d3b727d7,Namespace:kube-system,Attempt:0,}"
Jan 17 00:28:48.394559 containerd[1707]: time="2026-01-17T00:28:48.394428067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:48.395060 containerd[1707]: time="2026-01-17T00:28:48.394951778Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:48.396958 containerd[1707]: time="2026-01-17T00:28:48.394996479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:48.396958 containerd[1707]: time="2026-01-17T00:28:48.395509690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:48.424057 systemd[1]: Started cri-containerd-69b33a5fd1270cefed53b6f4ae0612e7b4332c483228fc88fa6a2034a5b97f72.scope - libcontainer container 69b33a5fd1270cefed53b6f4ae0612e7b4332c483228fc88fa6a2034a5b97f72.
Jan 17 00:28:48.446096 containerd[1707]: time="2026-01-17T00:28:48.446043531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-k9rhv,Uid:3f91f1f2-8661-48d9-869a-379dfb590134,Namespace:tigera-operator,Attempt:0,}"
Jan 17 00:28:48.450250 containerd[1707]: time="2026-01-17T00:28:48.450212417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lk5cb,Uid:8477113f-eede-40b4-a279-de39d3b727d7,Namespace:kube-system,Attempt:0,} returns sandbox id \"69b33a5fd1270cefed53b6f4ae0612e7b4332c483228fc88fa6a2034a5b97f72\""
Jan 17 00:28:48.466745 containerd[1707]: time="2026-01-17T00:28:48.466584355Z" level=info msg="CreateContainer within sandbox \"69b33a5fd1270cefed53b6f4ae0612e7b4332c483228fc88fa6a2034a5b97f72\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 00:28:48.505642 containerd[1707]: time="2026-01-17T00:28:48.505410055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:28:48.505642 containerd[1707]: time="2026-01-17T00:28:48.505468556Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:28:48.505642 containerd[1707]: time="2026-01-17T00:28:48.505490357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:48.505642 containerd[1707]: time="2026-01-17T00:28:48.505581559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:28:48.557908 containerd[1707]: time="2026-01-17T00:28:48.557092120Z" level=info msg="CreateContainer within sandbox \"69b33a5fd1270cefed53b6f4ae0612e7b4332c483228fc88fa6a2034a5b97f72\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"be77bfc4611eb7929fe26c789521c0beec75c99eafefa7d0e3ef8e6ffa39d803\""
Jan 17 00:28:48.560502 containerd[1707]: time="2026-01-17T00:28:48.558161042Z" level=info msg="StartContainer for \"be77bfc4611eb7929fe26c789521c0beec75c99eafefa7d0e3ef8e6ffa39d803\""
Jan 17 00:28:48.560024 systemd[1]: Started cri-containerd-b537d14a6bb6026bf6ddb8f8c57c21e942925d9298d9af8f48bb132b773c044a.scope - libcontainer container b537d14a6bb6026bf6ddb8f8c57c21e942925d9298d9af8f48bb132b773c044a.
Jan 17 00:28:48.607172 systemd[1]: Started cri-containerd-be77bfc4611eb7929fe26c789521c0beec75c99eafefa7d0e3ef8e6ffa39d803.scope - libcontainer container be77bfc4611eb7929fe26c789521c0beec75c99eafefa7d0e3ef8e6ffa39d803.
Jan 17 00:28:48.640792 containerd[1707]: time="2026-01-17T00:28:48.640744545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-k9rhv,Uid:3f91f1f2-8661-48d9-869a-379dfb590134,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b537d14a6bb6026bf6ddb8f8c57c21e942925d9298d9af8f48bb132b773c044a\""
Jan 17 00:28:48.645372 containerd[1707]: time="2026-01-17T00:28:48.645083634Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 17 00:28:48.655919 containerd[1707]: time="2026-01-17T00:28:48.655880857Z" level=info msg="StartContainer for \"be77bfc4611eb7929fe26c789521c0beec75c99eafefa7d0e3ef8e6ffa39d803\" returns successfully"
Jan 17 00:28:49.393589 kubelet[3212]: I0117 00:28:49.393485 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lk5cb" podStartSLOduration=2.39346406 podStartE2EDuration="2.39346406s" podCreationTimestamp="2026-01-17 00:28:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:28:49.000061851 +0000 UTC m=+6.198497822" watchObservedRunningTime="2026-01-17 00:28:49.39346406 +0000 UTC m=+6.591900031"
Jan 17 00:28:50.422816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1535301731.mount: Deactivated successfully.
Jan 17 00:28:51.127808 containerd[1707]: time="2026-01-17T00:28:51.127747902Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:51.130312 containerd[1707]: time="2026-01-17T00:28:51.130156553Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=25061691"
Jan 17 00:28:51.133839 containerd[1707]: time="2026-01-17T00:28:51.132616405Z" level=info msg="ImageCreate event name:\"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:51.139951 containerd[1707]: time="2026-01-17T00:28:51.138788035Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:28:51.140642 containerd[1707]: time="2026-01-17T00:28:51.140612674Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"25057686\" in 2.495484639s"
Jan 17 00:28:51.140749 containerd[1707]: time="2026-01-17T00:28:51.140732177Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:f2c1be207523e593db82e3b8cf356a12f3ad8d1aad2225f8114b2cf9d6486cf1\""
Jan 17 00:28:51.149620 containerd[1707]: time="2026-01-17T00:28:51.149587164Z" level=info msg="CreateContainer within sandbox \"b537d14a6bb6026bf6ddb8f8c57c21e942925d9298d9af8f48bb132b773c044a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 00:28:51.178453 containerd[1707]: time="2026-01-17T00:28:51.178412074Z" level=info msg="CreateContainer within sandbox \"b537d14a6bb6026bf6ddb8f8c57c21e942925d9298d9af8f48bb132b773c044a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"387b1c62cd4bc610beccb874bf69b7611fa9572ecf45fad9ec7ca47ffa31f734\""
Jan 17 00:28:51.178997 containerd[1707]: time="2026-01-17T00:28:51.178971386Z" level=info msg="StartContainer for \"387b1c62cd4bc610beccb874bf69b7611fa9572ecf45fad9ec7ca47ffa31f734\""
Jan 17 00:28:51.211021 systemd[1]: Started cri-containerd-387b1c62cd4bc610beccb874bf69b7611fa9572ecf45fad9ec7ca47ffa31f734.scope - libcontainer container 387b1c62cd4bc610beccb874bf69b7611fa9572ecf45fad9ec7ca47ffa31f734.
Jan 17 00:28:51.245143 containerd[1707]: time="2026-01-17T00:28:51.245095985Z" level=info msg="StartContainer for \"387b1c62cd4bc610beccb874bf69b7611fa9572ecf45fad9ec7ca47ffa31f734\" returns successfully"
Jan 17 00:28:52.002945 kubelet[3212]: I0117 00:28:52.002305 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-k9rhv" podStartSLOduration=1.502350676 podStartE2EDuration="4.002286908s" podCreationTimestamp="2026-01-17 00:28:48 +0000 UTC" firstStartedPulling="2026-01-17 00:28:48.642697385 +0000 UTC m=+5.841133456" lastFinishedPulling="2026-01-17 00:28:51.142633617 +0000 UTC m=+8.341069688" observedRunningTime="2026-01-17 00:28:52.0018782 +0000 UTC m=+9.200314171" watchObservedRunningTime="2026-01-17 00:28:52.002286908 +0000 UTC m=+9.200722879"
Jan 17 00:28:57.677368 sudo[2313]: pam_unix(sudo:session): session closed for user root
Jan 17 00:28:57.780527 sshd[2310]: pam_unix(sshd:session): session closed for user core
Jan 17 00:28:57.784607 systemd[1]: sshd@6-10.200.8.17:22-10.200.16.10:43812.service: Deactivated successfully.
Jan 17 00:28:57.789466 systemd[1]: session-9.scope: Deactivated successfully.
Jan 17 00:28:57.789672 systemd[1]: session-9.scope: Consumed 4.744s CPU time, 161.4M memory peak, 0B memory swap peak.
Jan 17 00:28:57.792272 systemd-logind[1685]: Session 9 logged out. Waiting for processes to exit.
Jan 17 00:28:57.794292 systemd-logind[1685]: Removed session 9.
Jan 17 00:29:04.071315 systemd[1]: Created slice kubepods-besteffort-pod798f257f_7a9f_4fbf_9247_8a12ce2926b3.slice - libcontainer container kubepods-besteffort-pod798f257f_7a9f_4fbf_9247_8a12ce2926b3.slice.
Jan 17 00:29:04.157331 kubelet[3212]: I0117 00:29:04.157072 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/798f257f-7a9f-4fbf-9247-8a12ce2926b3-tigera-ca-bundle\") pod \"calico-typha-64c6fcdc9d-hflgc\" (UID: \"798f257f-7a9f-4fbf-9247-8a12ce2926b3\") " pod="calico-system/calico-typha-64c6fcdc9d-hflgc"
Jan 17 00:29:04.157331 kubelet[3212]: I0117 00:29:04.157132 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp7c5\" (UniqueName: \"kubernetes.io/projected/798f257f-7a9f-4fbf-9247-8a12ce2926b3-kube-api-access-dp7c5\") pod \"calico-typha-64c6fcdc9d-hflgc\" (UID: \"798f257f-7a9f-4fbf-9247-8a12ce2926b3\") " pod="calico-system/calico-typha-64c6fcdc9d-hflgc"
Jan 17 00:29:04.157331 kubelet[3212]: I0117 00:29:04.157158 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/798f257f-7a9f-4fbf-9247-8a12ce2926b3-typha-certs\") pod \"calico-typha-64c6fcdc9d-hflgc\" (UID: \"798f257f-7a9f-4fbf-9247-8a12ce2926b3\") " pod="calico-system/calico-typha-64c6fcdc9d-hflgc"
Jan 17 00:29:04.298426 systemd[1]: Created slice kubepods-besteffort-pod86d84c39_0b83_4bcd_8032_311655b499c1.slice - libcontainer container kubepods-besteffort-pod86d84c39_0b83_4bcd_8032_311655b499c1.slice.
Jan 17 00:29:04.358745 kubelet[3212]: I0117 00:29:04.358575 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-cni-net-dir\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.358745 kubelet[3212]: I0117 00:29:04.358630 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-var-lib-calico\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.358745 kubelet[3212]: I0117 00:29:04.358655 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-cni-bin-dir\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.358745 kubelet[3212]: I0117 00:29:04.358675 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-cni-log-dir\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.358745 kubelet[3212]: I0117 00:29:04.358694 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/86d84c39-0b83-4bcd-8032-311655b499c1-node-certs\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359123 kubelet[3212]: I0117 00:29:04.358713 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/86d84c39-0b83-4bcd-8032-311655b499c1-tigera-ca-bundle\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359123 kubelet[3212]: I0117 00:29:04.358732 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-xtables-lock\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359123 kubelet[3212]: I0117 00:29:04.358756 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-policysync\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359123 kubelet[3212]: I0117 00:29:04.358779 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87dzs\" (UniqueName: \"kubernetes.io/projected/86d84c39-0b83-4bcd-8032-311655b499c1-kube-api-access-87dzs\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359123 kubelet[3212]: I0117 00:29:04.358799 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-flexvol-driver-host\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359323 kubelet[3212]: I0117 00:29:04.358821 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-lib-modules\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.359323 kubelet[3212]: I0117 00:29:04.358890 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/86d84c39-0b83-4bcd-8032-311655b499c1-var-run-calico\") pod \"calico-node-b5k92\" (UID: \"86d84c39-0b83-4bcd-8032-311655b499c1\") " pod="calico-system/calico-node-b5k92"
Jan 17 00:29:04.381709 containerd[1707]: time="2026-01-17T00:29:04.381652789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c6fcdc9d-hflgc,Uid:798f257f-7a9f-4fbf-9247-8a12ce2926b3,Namespace:calico-system,Attempt:0,}"
Jan 17 00:29:04.432974 containerd[1707]: time="2026-01-17T00:29:04.432570374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:29:04.432974 containerd[1707]: time="2026-01-17T00:29:04.432671376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:29:04.432974 containerd[1707]: time="2026-01-17T00:29:04.432701377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:29:04.432974 containerd[1707]: time="2026-01-17T00:29:04.432798579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:29:04.463062 systemd[1]: Started cri-containerd-c12cd733d5a860205ffcbdd78f9dd6cd9dba56a55b368d7fb12521a8f061cc36.scope - libcontainer container c12cd733d5a860205ffcbdd78f9dd6cd9dba56a55b368d7fb12521a8f061cc36.
Jan 17 00:29:04.466510 kubelet[3212]: E0117 00:29:04.465589 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.466510 kubelet[3212]: W0117 00:29:04.465737 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.466510 kubelet[3212]: E0117 00:29:04.465884 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.466510 kubelet[3212]: E0117 00:29:04.466421 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.466510 kubelet[3212]: W0117 00:29:04.466443 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.466510 kubelet[3212]: E0117 00:29:04.466459 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.467559 kubelet[3212]: E0117 00:29:04.466813 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.467559 kubelet[3212]: W0117 00:29:04.466825 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.467559 kubelet[3212]: E0117 00:29:04.466866 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.468812 kubelet[3212]: E0117 00:29:04.468666 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.468812 kubelet[3212]: W0117 00:29:04.468684 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.468812 kubelet[3212]: E0117 00:29:04.468730 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.470006 kubelet[3212]: E0117 00:29:04.469837 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.470006 kubelet[3212]: W0117 00:29:04.469888 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.470006 kubelet[3212]: E0117 00:29:04.469905 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.470727 kubelet[3212]: E0117 00:29:04.470340 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.470727 kubelet[3212]: W0117 00:29:04.470352 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.470727 kubelet[3212]: E0117 00:29:04.470365 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.470727 kubelet[3212]: E0117 00:29:04.470661 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.470727 kubelet[3212]: W0117 00:29:04.470673 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.470727 kubelet[3212]: E0117 00:29:04.470687 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.470941 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.472178 kubelet[3212]: W0117 00:29:04.470953 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.470967 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.471206 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.472178 kubelet[3212]: W0117 00:29:04.471218 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.471232 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.471472 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.472178 kubelet[3212]: W0117 00:29:04.471484 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.471498 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.472178 kubelet[3212]: E0117 00:29:04.471719 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.473124 kubelet[3212]: W0117 00:29:04.471731 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.471766 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.472014 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.473124 kubelet[3212]: W0117 00:29:04.472025 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.472038 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.472483 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.473124 kubelet[3212]: W0117 00:29:04.472497 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.472510 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.473124 kubelet[3212]: E0117 00:29:04.472754 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.473124 kubelet[3212]: W0117 00:29:04.472766 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.472779 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.473158 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.475218 kubelet[3212]: W0117 00:29:04.473171 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.473187 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.473416 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.475218 kubelet[3212]: W0117 00:29:04.473428 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.473442 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.475002 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.475218 kubelet[3212]: W0117 00:29:04.475015 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.475218 kubelet[3212]: E0117 00:29:04.475029 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.491528 kubelet[3212]: E0117 00:29:04.491451 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.491528 kubelet[3212]: W0117 00:29:04.491471 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.491528 kubelet[3212]: E0117 00:29:04.491488 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.526532 kubelet[3212]: E0117 00:29:04.526476 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075"
Jan 17 00:29:04.531240 kubelet[3212]: E0117 00:29:04.531182 3212 status_manager.go:1018] "Failed to get status for pod" err="pods \"csi-node-driver-7kvdv\" is forbidden: User \"system:node:ci-4081.3.6-n-c809bb5d02\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-n-c809bb5d02' and this object" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.531792 kubelet[3212]: E0117 00:29:04.531762 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.531792 kubelet[3212]: W0117 00:29:04.531789 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.531991 kubelet[3212]: E0117 00:29:04.531812 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.532161 kubelet[3212]: E0117 00:29:04.532143 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.532238 kubelet[3212]: W0117 00:29:04.532163 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.532238 kubelet[3212]: E0117 00:29:04.532179 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.532481 kubelet[3212]: E0117 00:29:04.532441 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.532481 kubelet[3212]: W0117 00:29:04.532459 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.532481 kubelet[3212]: E0117 00:29:04.532473 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.533165 kubelet[3212]: E0117 00:29:04.533127 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.533165 kubelet[3212]: W0117 00:29:04.533147 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.533304 kubelet[3212]: E0117 00:29:04.533189 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.533671 kubelet[3212]: E0117 00:29:04.533638 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.533671 kubelet[3212]: W0117 00:29:04.533668 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.534034 kubelet[3212]: E0117 00:29:04.533683 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.534110 kubelet[3212]: E0117 00:29:04.534051 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.534110 kubelet[3212]: W0117 00:29:04.534062 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.534110 kubelet[3212]: E0117 00:29:04.534076 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.534316 kubelet[3212]: E0117 00:29:04.534301 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.534387 kubelet[3212]: W0117 00:29:04.534316 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.534387 kubelet[3212]: E0117 00:29:04.534334 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.534571 kubelet[3212]: E0117 00:29:04.534556 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.534571 kubelet[3212]: W0117 00:29:04.534570 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.534743 kubelet[3212]: E0117 00:29:04.534583 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.534866 kubelet[3212]: E0117 00:29:04.534828 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.534866 kubelet[3212]: W0117 00:29:04.534863 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.534981 kubelet[3212]: E0117 00:29:04.534877 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535080 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.536717 kubelet[3212]: W0117 00:29:04.535092 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535105 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535277 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.536717 kubelet[3212]: W0117 00:29:04.535286 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535295 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535453 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.536717 kubelet[3212]: W0117 00:29:04.535461 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535469 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.536717 kubelet[3212]: E0117 00:29:04.535748 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.537189 kubelet[3212]: W0117 00:29:04.535757 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.535768 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.535972 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.537189 kubelet[3212]: W0117 00:29:04.535982 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.535992 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.536159 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.537189 kubelet[3212]: W0117 00:29:04.536167 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.536176 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.537189 kubelet[3212]: E0117 00:29:04.536333 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.537189 kubelet[3212]: W0117 00:29:04.536341 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.537572 kubelet[3212]: E0117 00:29:04.536350 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.538014 kubelet[3212]: E0117 00:29:04.537991 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.538014 kubelet[3212]: W0117 00:29:04.538014 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.538256 kubelet[3212]: E0117 00:29:04.538029 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.538256 kubelet[3212]: E0117 00:29:04.538243 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.538256 kubelet[3212]: W0117 00:29:04.538254 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.538411 kubelet[3212]: E0117 00:29:04.538267 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.538579 kubelet[3212]: E0117 00:29:04.538559 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.538579 kubelet[3212]: W0117 00:29:04.538579 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.538723 kubelet[3212]: E0117 00:29:04.538592 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.538852 kubelet[3212]: E0117 00:29:04.538823 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.538927 kubelet[3212]: W0117 00:29:04.538910 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.538973 kubelet[3212]: E0117 00:29:04.538932 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.556977 containerd[1707]: time="2026-01-17T00:29:04.556612218Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64c6fcdc9d-hflgc,Uid:798f257f-7a9f-4fbf-9247-8a12ce2926b3,Namespace:calico-system,Attempt:0,} returns sandbox id \"c12cd733d5a860205ffcbdd78f9dd6cd9dba56a55b368d7fb12521a8f061cc36\""
Jan 17 00:29:04.560203 containerd[1707]: time="2026-01-17T00:29:04.559922188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:29:04.561409 kubelet[3212]: E0117 00:29:04.561170 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.561409 kubelet[3212]: W0117 00:29:04.561191 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.561409 kubelet[3212]: E0117 00:29:04.561214 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.561409 kubelet[3212]: I0117 00:29:04.561282 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47118e25-f9cc-45d1-87d8-eb13465b2075-registration-dir\") pod \"csi-node-driver-7kvdv\" (UID: \"47118e25-f9cc-45d1-87d8-eb13465b2075\") " pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.561749 kubelet[3212]: E0117 00:29:04.561724 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.561810 kubelet[3212]: W0117 00:29:04.561763 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.561810 kubelet[3212]: E0117 00:29:04.561785 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.561951 kubelet[3212]: I0117 00:29:04.561816 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47118e25-f9cc-45d1-87d8-eb13465b2075-socket-dir\") pod \"csi-node-driver-7kvdv\" (UID: \"47118e25-f9cc-45d1-87d8-eb13465b2075\") " pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.562229 kubelet[3212]: E0117 00:29:04.562209 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.562229 kubelet[3212]: W0117 00:29:04.562226 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.562474 kubelet[3212]: E0117 00:29:04.562242 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.562474 kubelet[3212]: E0117 00:29:04.562472 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.562572 kubelet[3212]: W0117 00:29:04.562486 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.562572 kubelet[3212]: E0117 00:29:04.562499 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.562941 kubelet[3212]: E0117 00:29:04.562919 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.562941 kubelet[3212]: W0117 00:29:04.562938 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.563229 kubelet[3212]: E0117 00:29:04.562964 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.563229 kubelet[3212]: I0117 00:29:04.562994 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkqsz\" (UniqueName: \"kubernetes.io/projected/47118e25-f9cc-45d1-87d8-eb13465b2075-kube-api-access-mkqsz\") pod \"csi-node-driver-7kvdv\" (UID: \"47118e25-f9cc-45d1-87d8-eb13465b2075\") " pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.563668 kubelet[3212]: E0117 00:29:04.563396 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.563668 kubelet[3212]: W0117 00:29:04.563432 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.563668 kubelet[3212]: E0117 00:29:04.563446 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.565440 kubelet[3212]: E0117 00:29:04.565150 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.565440 kubelet[3212]: W0117 00:29:04.565166 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.565440 kubelet[3212]: E0117 00:29:04.565189 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.565904 kubelet[3212]: E0117 00:29:04.565566 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.565904 kubelet[3212]: W0117 00:29:04.565579 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.565904 kubelet[3212]: E0117 00:29:04.565592 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.565904 kubelet[3212]: I0117 00:29:04.565775 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47118e25-f9cc-45d1-87d8-eb13465b2075-kubelet-dir\") pod \"csi-node-driver-7kvdv\" (UID: \"47118e25-f9cc-45d1-87d8-eb13465b2075\") " pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.566339 kubelet[3212]: E0117 00:29:04.566320 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.566339 kubelet[3212]: W0117 00:29:04.566336 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.566613 kubelet[3212]: E0117 00:29:04.566352 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.566719 kubelet[3212]: E0117 00:29:04.566671 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.566719 kubelet[3212]: W0117 00:29:04.566683 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.566719 kubelet[3212]: E0117 00:29:04.566696 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.567054 kubelet[3212]: E0117 00:29:04.566950 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.567054 kubelet[3212]: W0117 00:29:04.566965 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.567054 kubelet[3212]: E0117 00:29:04.566979 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.567054 kubelet[3212]: I0117 00:29:04.567009 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/47118e25-f9cc-45d1-87d8-eb13465b2075-varrun\") pod \"csi-node-driver-7kvdv\" (UID: \"47118e25-f9cc-45d1-87d8-eb13465b2075\") " pod="calico-system/csi-node-driver-7kvdv"
Jan 17 00:29:04.567392 kubelet[3212]: E0117 00:29:04.567371 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.567392 kubelet[3212]: W0117 00:29:04.567389 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.567504 kubelet[3212]: E0117 00:29:04.567404 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.567921 kubelet[3212]: E0117 00:29:04.567900 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.567921 kubelet[3212]: W0117 00:29:04.567921 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.568188 kubelet[3212]: E0117 00:29:04.567936 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.568980 kubelet[3212]: E0117 00:29:04.568960 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.569069 kubelet[3212]: W0117 00:29:04.568984 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.569069 kubelet[3212]: E0117 00:29:04.568999 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.569697 kubelet[3212]: E0117 00:29:04.569632 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.569697 kubelet[3212]: W0117 00:29:04.569654 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.569697 kubelet[3212]: E0117 00:29:04.569670 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.610271 containerd[1707]: time="2026-01-17T00:29:04.610117958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5k92,Uid:86d84c39-0b83-4bcd-8032-311655b499c1,Namespace:calico-system,Attempt:0,}"
Jan 17 00:29:04.669881 kubelet[3212]: E0117 00:29:04.668402 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.669881 kubelet[3212]: W0117 00:29:04.668441 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.669881 kubelet[3212]: E0117 00:29:04.668472 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.670705 kubelet[3212]: E0117 00:29:04.670406 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.670705 kubelet[3212]: W0117 00:29:04.670431 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.670705 kubelet[3212]: E0117 00:29:04.670456 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.671022 kubelet[3212]: E0117 00:29:04.670900 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.671022 kubelet[3212]: W0117 00:29:04.670913 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.671022 kubelet[3212]: E0117 00:29:04.670932 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.672914 kubelet[3212]: E0117 00:29:04.671919 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.672914 kubelet[3212]: W0117 00:29:04.671935 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.672914 kubelet[3212]: E0117 00:29:04.671949 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.673302 kubelet[3212]: E0117 00:29:04.673285 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.673886 kubelet[3212]: W0117 00:29:04.673628 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.673886 kubelet[3212]: E0117 00:29:04.673652 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.674049 kubelet[3212]: E0117 00:29:04.673924 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.674049 kubelet[3212]: W0117 00:29:04.673937 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.674049 kubelet[3212]: E0117 00:29:04.673950 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.675176 kubelet[3212]: E0117 00:29:04.675036 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.675176 kubelet[3212]: W0117 00:29:04.675054 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.675176 kubelet[3212]: E0117 00:29:04.675069 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.675508 kubelet[3212]: E0117 00:29:04.675306 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.675508 kubelet[3212]: W0117 00:29:04.675317 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.675508 kubelet[3212]: E0117 00:29:04.675331 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.676022 kubelet[3212]: E0117 00:29:04.675805 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.676022 kubelet[3212]: W0117 00:29:04.675819 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.676022 kubelet[3212]: E0117 00:29:04.675832 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.676869 kubelet[3212]: E0117 00:29:04.676257 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.676869 kubelet[3212]: W0117 00:29:04.676271 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.676869 kubelet[3212]: E0117 00:29:04.676283 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.677408 kubelet[3212]: E0117 00:29:04.677180 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.677408 kubelet[3212]: W0117 00:29:04.677195 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.677408 kubelet[3212]: E0117 00:29:04.677209 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input"
Jan 17 00:29:04.677778 kubelet[3212]: E0117 00:29:04.677649 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.677778 kubelet[3212]: W0117 00:29:04.677661 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.677778 kubelet[3212]: E0117 00:29:04.677679 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 17 00:29:04.679347 kubelet[3212]: E0117 00:29:04.679207 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.679347 kubelet[3212]: W0117 00:29:04.679222 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.679347 kubelet[3212]: E0117 00:29:04.679236 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.679769 kubelet[3212]: E0117 00:29:04.679645 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.679769 kubelet[3212]: W0117 00:29:04.679659 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.679769 kubelet[3212]: E0117 00:29:04.679673 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.680224 kubelet[3212]: E0117 00:29:04.680101 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.680224 kubelet[3212]: W0117 00:29:04.680116 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.680224 kubelet[3212]: E0117 00:29:04.680129 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.680538 kubelet[3212]: E0117 00:29:04.680520 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.680538 kubelet[3212]: W0117 00:29:04.680537 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.680773 kubelet[3212]: E0117 00:29:04.680554 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.680821 kubelet[3212]: E0117 00:29:04.680791 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.680821 kubelet[3212]: W0117 00:29:04.680802 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.681944 kubelet[3212]: E0117 00:29:04.680816 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:29:04.682413 kubelet[3212]: E0117 00:29:04.682394 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.682413 kubelet[3212]: W0117 00:29:04.682411 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.682686 kubelet[3212]: E0117 00:29:04.682426 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.682947 kubelet[3212]: E0117 00:29:04.682925 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.684284 kubelet[3212]: W0117 00:29:04.684258 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.684372 kubelet[3212]: E0117 00:29:04.684286 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.685040 kubelet[3212]: E0117 00:29:04.684957 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.685040 kubelet[3212]: W0117 00:29:04.684972 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.685040 kubelet[3212]: E0117 00:29:04.684988 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.685887 containerd[1707]: time="2026-01-17T00:29:04.685156457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:04.685887 containerd[1707]: time="2026-01-17T00:29:04.685232759Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:04.685887 containerd[1707]: time="2026-01-17T00:29:04.685255659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:04.686056 kubelet[3212]: E0117 00:29:04.685453 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.686056 kubelet[3212]: W0117 00:29:04.685464 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.686056 kubelet[3212]: E0117 00:29:04.685477 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:29:04.687137 kubelet[3212]: E0117 00:29:04.686458 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.687137 kubelet[3212]: W0117 00:29:04.686473 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.687137 kubelet[3212]: E0117 00:29:04.686487 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.687369 containerd[1707]: time="2026-01-17T00:29:04.685922874Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:04.688106 kubelet[3212]: E0117 00:29:04.687479 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.688106 kubelet[3212]: W0117 00:29:04.687499 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.688106 kubelet[3212]: E0117 00:29:04.687526 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.688970 kubelet[3212]: E0117 00:29:04.688320 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.688970 kubelet[3212]: W0117 00:29:04.688349 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.688970 kubelet[3212]: E0117 00:29:04.688363 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.689466 kubelet[3212]: E0117 00:29:04.689270 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:29:04.689466 kubelet[3212]: W0117 00:29:04.689286 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:29:04.689466 kubelet[3212]: E0117 00:29:04.689301 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:29:04.722578 systemd[1]: Started cri-containerd-649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727.scope - libcontainer container 649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727. 
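The burst above is kubelet's FlexVolume prober at work: on each probe cycle it walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, runs every vendor~driver binary with the single argument init, and parses stdout as JSON. The nodeagent~uds/uds executable is missing here, so stdout is empty, the JSON decode fails, and the same three records (driver-call.go:262, driver-call.go:149, plugins.go:697) repeat on every cycle. The sketch below is a hypothetical stand-in showing the handshake a conforming driver would answer with; the exact capability set is an assumption, and this is not Calico's real uds binary:

```go
// flexvol_init.go: a hypothetical stand-in for the missing
// /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds binary.
// It answers the "init" call with the JSON object kubelet's prober parses.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet decodes this with encoding/json; an empty stdout is what
		// produces the "unexpected end of JSON input" records above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Unimplemented calls report "Not supported" per the FlexVolume convention.
	fmt.Println(`{"status": "Not supported"}`)
}
```

Calico's pod2daemon-flexvol image, pulled later in this log, exists to copy the real uds driver into that directory, which is what can be expected to quiet this spam once calico-node is fully up.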
Jan 17 00:29:04.728146 kubelet[3212]: E0117 00:29:04.725245 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:04.728146 kubelet[3212]: W0117 00:29:04.728068 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:04.728146 kubelet[3212]: E0117 00:29:04.728095 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:29:04.847154 containerd[1707]: time="2026-01-17T00:29:04.846985506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-b5k92,Uid:86d84c39-0b83-4bcd-8032-311655b499c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\""
Jan 17 00:29:05.691343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956145323.mount: Deactivated successfully.
Jan 17 00:29:05.910593 kubelet[3212]: E0117 00:29:05.910525 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075"
Jan 17 00:29:06.821562 containerd[1707]: time="2026-01-17T00:29:06.821508787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:06.824293 containerd[1707]: time="2026-01-17T00:29:06.824115242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=35234628"
Jan 17 00:29:06.827733 containerd[1707]: time="2026-01-17T00:29:06.827411712Z" level=info msg="ImageCreate event name:\"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:06.832196 containerd[1707]: time="2026-01-17T00:29:06.832144913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:06.833190 containerd[1707]: time="2026-01-17T00:29:06.832753626Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"35234482\" in 2.272788137s"
Jan 17 00:29:06.833190 containerd[1707]: time="2026-01-17T00:29:06.832799927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:aa1490366a77160b4cc8f9af82281ab7201ffda0882871f860e1eb1c4f825958\""
Jan 17 00:29:06.834681 containerd[1707]: time="2026-01-17T00:29:06.834214857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 17 00:29:06.859819 containerd[1707]: time="2026-01-17T00:29:06.859777802Z" level=info msg="CreateContainer within sandbox \"c12cd733d5a860205ffcbdd78f9dd6cd9dba56a55b368d7fb12521a8f061cc36\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:29:06.893993 containerd[1707]: time="2026-01-17T00:29:06.893952531Z" level=info msg="CreateContainer within sandbox \"c12cd733d5a860205ffcbdd78f9dd6cd9dba56a55b368d7fb12521a8f061cc36\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9f8087b2843cf70366f5cc6df250fab9bb4f89d3c1ce41ca1b6d51012a86ba99\""
Jan 17 00:29:06.895236 containerd[1707]: time="2026-01-17T00:29:06.894536443Z" level=info msg="StartContainer for \"9f8087b2843cf70366f5cc6df250fab9bb4f89d3c1ce41ca1b6d51012a86ba99\""
Jan 17 00:29:06.934045 systemd[1]: Started cri-containerd-9f8087b2843cf70366f5cc6df250fab9bb4f89d3c1ce41ca1b6d51012a86ba99.scope - libcontainer container 9f8087b2843cf70366f5cc6df250fab9bb4f89d3c1ce41ca1b6d51012a86ba99.
Jan 17 00:29:06.995379 containerd[1707]: time="2026-01-17T00:29:06.995186794Z" level=info msg="StartContainer for \"9f8087b2843cf70366f5cc6df250fab9bb4f89d3c1ce41ca1b6d51012a86ba99\" returns successfully"
Jan 17 00:29:07.055345 kubelet[3212]: E0117 00:29:07.055296 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:07.056530 kubelet[3212]: W0117 00:29:07.055428 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:07.056530 kubelet[3212]: E0117 00:29:07.055456 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
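The error string itself comes straight from Go's encoding/json package: decoding an empty byte slice always yields "unexpected end of JSON input". A two-line reproduction of what driver-call.go:262 hits when the driver produces no output:

```go
// empty_json.go: reproduces the exact failure logged by driver-call.go:262,
// unmarshalling an empty driver stdout.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	var status map[string]interface{}
	err := json.Unmarshal([]byte(""), &status) // empty driver output
	fmt.Println(err)                           // prints: unexpected end of JSON input
}
```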
Jan 17 00:29:07.078805 kubelet[3212]: I0117 00:29:07.078640 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64c6fcdc9d-hflgc" podStartSLOduration=0.803778522 podStartE2EDuration="3.078619102s" podCreationTimestamp="2026-01-17 00:29:04 +0000 UTC" firstStartedPulling="2026-01-17 00:29:04.55904047 +0000 UTC m=+21.757476441" lastFinishedPulling="2026-01-17 00:29:06.83388105 +0000 UTC m=+24.032317021" observedRunningTime="2026-01-17 00:29:07.052987847 +0000 UTC m=+24.251423818" watchObservedRunningTime="2026-01-17 00:29:07.078619102 +0000 UTC m=+24.277055173"
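The pod_startup_latency_tracker record above is internally consistent: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration excludes image-pull time (lastFinishedPulling minus firstStartedPulling). A quick check of the arithmetic, using the timestamps exactly as logged:

```go
// pod_startup.go: re-derives the numbers in the pod_startup_latency_tracker
// record: E2E duration minus image-pull time equals the SLO duration.
package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-17 00:29:04 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-17 00:29:07.078619102 +0000 UTC")
	pullStart, _ := time.Parse(layout, "2026-01-17 00:29:04.55904047 +0000 UTC")
	pullEnd, _ := time.Parse(layout, "2026-01-17 00:29:06.83388105 +0000 UTC")

	e2e := running.Sub(created)      // 3.078619102s (podStartE2EDuration)
	pull := pullEnd.Sub(pullStart)   // ~2.27484058s spent pulling images
	fmt.Println(e2e, pull, e2e-pull) // e2e-pull = 803.778522ms (podStartSLOduration)
}
```

3.078619102s minus 2.27484058s is 0.803778522s, matching the logged podStartSLOduration to the nanosecond.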
Jan 17 00:29:07.911457 kubelet[3212]: E0117 00:29:07.911405 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075"
Jan 17 00:29:08.072352 kubelet[3212]: E0117 00:29:08.071867 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:08.072352 kubelet[3212]: W0117 00:29:08.071906 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:08.072352 kubelet[3212]: E0117 00:29:08.071944 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
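The pod_workers.go records for csi-node-driver-7kvdv are a different failure from the FlexVolume noise: the container runtime reports NetworkReady=false until a CNI network config exists, and calico-node only writes that config after its install containers finish. A rough sketch of the readiness condition; /etc/cni/net.d is the conventional confdir and an assumption for this host:

```go
// cni_ready.go: a rough sketch of the check behind "cni plugin not
// initialized": the runtime stays NetworkReady=false until a network
// config file appears in the CNI confdir.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Matches both *.conf and *.conflist; ignore the error for brevity.
	confs, _ := filepath.Glob("/etc/cni/net.d/*.conf*")
	if len(confs) == 0 {
		fmt.Println("NetworkReady=false: no CNI config yet (calico-node has not written one)")
		os.Exit(1)
	}
	fmt.Println("CNI config present:", confs)
}
```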
Jan 17 00:29:08.074392 containerd[1707]: time="2026-01-17T00:29:08.073513174Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:08.080100 containerd[1707]: time="2026-01-17T00:29:08.079727008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4446754"
Jan 17 00:29:08.082240 containerd[1707]: time="2026-01-17T00:29:08.082034358Z" level=info msg="ImageCreate event name:\"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:08.087376 containerd[1707]: time="2026-01-17T00:29:08.087199470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:29:08.089490 containerd[1707]: time="2026-01-17T00:29:08.088340595Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5941314\" in 1.254092037s"
Jan 17 00:29:08.089490 containerd[1707]: time="2026-01-17T00:29:08.088392896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:570719e9c34097019014ae2ad94edf4e523bc6892e77fb1c64c23e5b7f390fe5\""
Jan 17 00:29:08.107593 containerd[1707]: time="2026-01-17T00:29:08.107536411Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
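The two completed pulls carry enough data for a throughput estimate: containerd logs both the bytes read and, in the "Pulled image" record, the elapsed time. A back-of-envelope calculation from the figures exactly as logged (typha: 35234628 bytes in 2.272788137s; pod2daemon-flexvol: 4446754 bytes in 1.254092037s):

```go
// pull_rate.go: back-of-envelope registry throughput from the two pull
// records above ("bytes read" and the "Pulled image ... in" duration).
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" reported by containerd
		seconds float64 // duration from the "Pulled image" message
	}{
		{"calico/typha:v3.30.4", 35234628, 2.272788137},
		{"calico/pod2daemon-flexvol:v3.30.4", 4446754, 1.254092037},
	}
	for _, p := range pulls {
		fmt.Printf("%-36s %.1f MB/s\n", p.image, p.bytes/p.seconds/1e6)
	}
	// Roughly 15.5 MB/s and 3.5 MB/s respectively.
}
```

The much lower rate for the small flexvol image is likely dominated by per-request overhead rather than bandwidth.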
Jan 17 00:29:08.118031 kubelet[3212]: E0117 00:29:08.117838 3212 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:29:08.118031 kubelet[3212]: W0117 00:29:08.117931 3212 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:29:08.118031 kubelet[3212]: E0117 00:29:08.117946 3212 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Jan 17 00:29:08.146376 containerd[1707]: time="2026-01-17T00:29:08.146336352Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462\"" Jan 17 00:29:08.147073 containerd[1707]: time="2026-01-17T00:29:08.147042568Z" level=info msg="StartContainer for \"4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462\"" Jan 17 00:29:08.189031 systemd[1]: Started cri-containerd-4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462.scope - libcontainer container 4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462. Jan 17 00:29:08.230074 containerd[1707]: time="2026-01-17T00:29:08.230000766Z" level=info msg="StartContainer for \"4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462\" returns successfully" Jan 17 00:29:08.238465 systemd[1]: cri-containerd-4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462.scope: Deactivated successfully. Jan 17 00:29:08.264470 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462-rootfs.mount: Deactivated successfully. Jan 17 00:29:09.684452 containerd[1707]: time="2026-01-17T00:29:09.684363700Z" level=info msg="shim disconnected" id=4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462 namespace=k8s.io Jan 17 00:29:09.684452 containerd[1707]: time="2026-01-17T00:29:09.684441701Z" level=warning msg="cleaning up after shim disconnected" id=4c74308b54fe94aba6d11d04278a46af993abed5f1e858403f6393409de6f462 namespace=k8s.io Jan 17 00:29:09.684452 containerd[1707]: time="2026-01-17T00:29:09.684453702Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:29:09.697888 containerd[1707]: time="2026-01-17T00:29:09.697819291Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:29:09Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:29:09.911144 kubelet[3212]: E0117 00:29:09.911080 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:10.044956 containerd[1707]: time="2026-01-17T00:29:10.044895417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 17 00:29:11.911825 kubelet[3212]: E0117 00:29:11.911515 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:13.215482 containerd[1707]: time="2026-01-17T00:29:13.215415759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:13.218299 containerd[1707]: time="2026-01-17T00:29:13.218102317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=70446859" Jan 17 00:29:13.221614 containerd[1707]: 
time="2026-01-17T00:29:13.221354788Z" level=info msg="ImageCreate event name:\"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:13.225370 containerd[1707]: time="2026-01-17T00:29:13.225329374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:13.226172 containerd[1707]: time="2026-01-17T00:29:13.226122491Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"71941459\" in 3.181177074s" Jan 17 00:29:13.226305 containerd[1707]: time="2026-01-17T00:29:13.226283695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:24e1e7377c738d4080eb462a29e2c6756d383d8d25ad87b7f49165581f20c3cd\"" Jan 17 00:29:13.234500 containerd[1707]: time="2026-01-17T00:29:13.234459972Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 00:29:13.269344 containerd[1707]: time="2026-01-17T00:29:13.269295027Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4\"" Jan 17 00:29:13.270354 containerd[1707]: time="2026-01-17T00:29:13.270295249Z" level=info msg="StartContainer for \"1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4\"" Jan 17 00:29:13.306194 systemd[1]: run-containerd-runc-k8s.io-1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4-runc.0mYxAM.mount: Deactivated successfully. Jan 17 00:29:13.316040 systemd[1]: Started cri-containerd-1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4.scope - libcontainer container 1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4. Jan 17 00:29:13.352991 containerd[1707]: time="2026-01-17T00:29:13.352928240Z" level=info msg="StartContainer for \"1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4\" returns successfully" Jan 17 00:29:13.910965 kubelet[3212]: E0117 00:29:13.910903 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:15.105063 systemd[1]: cri-containerd-1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4.scope: Deactivated successfully. Jan 17 00:29:15.131306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4-rootfs.mount: Deactivated successfully. 
Jan 17 00:29:15.138605 kubelet[3212]: I0117 00:29:15.138529 3212 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 17 00:29:16.347087 containerd[1707]: time="2026-01-17T00:29:16.346516363Z" level=info msg="shim disconnected" id=1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4 namespace=k8s.io Jan 17 00:29:16.347087 containerd[1707]: time="2026-01-17T00:29:16.346643565Z" level=warning msg="cleaning up after shim disconnected" id=1b0af7c1fbd92adf048f434c03eb5bd388004eeb11ec8e0451f21571cf9aaad4 namespace=k8s.io Jan 17 00:29:16.347087 containerd[1707]: time="2026-01-17T00:29:16.346659266Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:29:16.348125 systemd[1]: Created slice kubepods-burstable-podd4d1f5cb_6ccd_4c1d_9961_16d6f6063290.slice - libcontainer container kubepods-burstable-podd4d1f5cb_6ccd_4c1d_9961_16d6f6063290.slice. Jan 17 00:29:16.368938 systemd[1]: Created slice kubepods-besteffort-pod47118e25_f9cc_45d1_87d8_eb13465b2075.slice - libcontainer container kubepods-besteffort-pod47118e25_f9cc_45d1_87d8_eb13465b2075.slice. Jan 17 00:29:16.375070 kubelet[3212]: I0117 00:29:16.371957 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4d1f5cb-6ccd-4c1d-9961-16d6f6063290-config-volume\") pod \"coredns-66bc5c9577-2cnwv\" (UID: \"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290\") " pod="kube-system/coredns-66bc5c9577-2cnwv" Jan 17 00:29:16.375070 kubelet[3212]: I0117 00:29:16.372003 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvqq6\" (UniqueName: \"kubernetes.io/projected/d4d1f5cb-6ccd-4c1d-9961-16d6f6063290-kube-api-access-lvqq6\") pod \"coredns-66bc5c9577-2cnwv\" (UID: \"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290\") " pod="kube-system/coredns-66bc5c9577-2cnwv" Jan 17 00:29:16.376397 containerd[1707]: time="2026-01-17T00:29:16.371320790Z" level=warning msg="cleanup warnings time=\"2026-01-17T00:29:16Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 00:29:16.381713 systemd[1]: Created slice kubepods-besteffort-podf761b8ec_f7d8_4ff6_9483_963882f3f6d4.slice - libcontainer container kubepods-besteffort-podf761b8ec_f7d8_4ff6_9483_963882f3f6d4.slice. Jan 17 00:29:16.393834 containerd[1707]: time="2026-01-17T00:29:16.390587799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kvdv,Uid:47118e25-f9cc-45d1-87d8-eb13465b2075,Namespace:calico-system,Attempt:0,}" Jan 17 00:29:16.397052 systemd[1]: Created slice kubepods-besteffort-pod4b45b454_ebe6_4d21_bf83_a7855971fc58.slice - libcontainer container kubepods-besteffort-pod4b45b454_ebe6_4d21_bf83_a7855971fc58.slice. Jan 17 00:29:16.445662 systemd[1]: Created slice kubepods-burstable-poda13ebeb2_eb90_475a_98df_04917f3b6561.slice - libcontainer container kubepods-burstable-poda13ebeb2_eb90_475a_98df_04917f3b6561.slice. Jan 17 00:29:16.454585 systemd[1]: Created slice kubepods-besteffort-pode0b77f88_1fec_4816_9462_8b6fdaf09daf.slice - libcontainer container kubepods-besteffort-pode0b77f88_1fec_4816_9462_8b6fdaf09daf.slice. 
Jan 17 00:29:16.474866 kubelet[3212]: I0117 00:29:16.473225 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4b45b454-ebe6-4d21-bf83-a7855971fc58-tigera-ca-bundle\") pod \"calico-kube-controllers-bfd8dc5f6-rbjmv\" (UID: \"4b45b454-ebe6-4d21-bf83-a7855971fc58\") " pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" Jan 17 00:29:16.474866 kubelet[3212]: I0117 00:29:16.473280 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a13ebeb2-eb90-475a-98df-04917f3b6561-config-volume\") pod \"coredns-66bc5c9577-9p5lm\" (UID: \"a13ebeb2-eb90-475a-98df-04917f3b6561\") " pod="kube-system/coredns-66bc5c9577-9p5lm" Jan 17 00:29:16.474866 kubelet[3212]: I0117 00:29:16.473315 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfpf6\" (UniqueName: \"kubernetes.io/projected/f761b8ec-f7d8-4ff6-9483-963882f3f6d4-kube-api-access-qfpf6\") pod \"calico-apiserver-6749bfd78c-xh4fx\" (UID: \"f761b8ec-f7d8-4ff6-9483-963882f3f6d4\") " pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" Jan 17 00:29:16.474866 kubelet[3212]: I0117 00:29:16.473365 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/782804bf-2c9e-4b36-ac94-4d730923b45e-calico-apiserver-certs\") pod \"calico-apiserver-6749bfd78c-bw7sp\" (UID: \"782804bf-2c9e-4b36-ac94-4d730923b45e\") " pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" Jan 17 00:29:16.474866 kubelet[3212]: I0117 00:29:16.473396 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dc357ba6-2c61-48b4-b7fe-5c77c584c2d0-calico-apiserver-certs\") pod \"calico-apiserver-bdcd7994c-plxvx\" (UID: \"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0\") " pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" Jan 17 00:29:16.475204 kubelet[3212]: I0117 00:29:16.473435 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-ca-bundle\") pod \"whisker-6977ddfdf8-4s6xp\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " pod="calico-system/whisker-6977ddfdf8-4s6xp" Jan 17 00:29:16.475204 kubelet[3212]: I0117 00:29:16.473457 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/87d77883-a4c9-44f4-bd4d-b065491724ef-config\") pod \"goldmane-7c778bb748-jvj5r\" (UID: \"87d77883-a4c9-44f4-bd4d-b065491724ef\") " pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:16.475204 kubelet[3212]: I0117 00:29:16.473480 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/87d77883-a4c9-44f4-bd4d-b065491724ef-goldmane-key-pair\") pod \"goldmane-7c778bb748-jvj5r\" (UID: \"87d77883-a4c9-44f4-bd4d-b065491724ef\") " pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:16.475204 kubelet[3212]: I0117 00:29:16.473506 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjvsl\" (UniqueName: 
\"kubernetes.io/projected/4b45b454-ebe6-4d21-bf83-a7855971fc58-kube-api-access-pjvsl\") pod \"calico-kube-controllers-bfd8dc5f6-rbjmv\" (UID: \"4b45b454-ebe6-4d21-bf83-a7855971fc58\") " pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" Jan 17 00:29:16.475204 kubelet[3212]: I0117 00:29:16.473528 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-backend-key-pair\") pod \"whisker-6977ddfdf8-4s6xp\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " pod="calico-system/whisker-6977ddfdf8-4s6xp" Jan 17 00:29:16.475441 kubelet[3212]: I0117 00:29:16.473560 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv698\" (UniqueName: \"kubernetes.io/projected/87d77883-a4c9-44f4-bd4d-b065491724ef-kube-api-access-jv698\") pod \"goldmane-7c778bb748-jvj5r\" (UID: \"87d77883-a4c9-44f4-bd4d-b065491724ef\") " pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:16.475441 kubelet[3212]: I0117 00:29:16.473585 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7g9fl\" (UniqueName: \"kubernetes.io/projected/a13ebeb2-eb90-475a-98df-04917f3b6561-kube-api-access-7g9fl\") pod \"coredns-66bc5c9577-9p5lm\" (UID: \"a13ebeb2-eb90-475a-98df-04917f3b6561\") " pod="kube-system/coredns-66bc5c9577-9p5lm" Jan 17 00:29:16.475441 kubelet[3212]: I0117 00:29:16.473611 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mr2\" (UniqueName: \"kubernetes.io/projected/dc357ba6-2c61-48b4-b7fe-5c77c584c2d0-kube-api-access-d5mr2\") pod \"calico-apiserver-bdcd7994c-plxvx\" (UID: \"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0\") " pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" Jan 17 00:29:16.475441 kubelet[3212]: I0117 00:29:16.473631 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p4m2\" (UniqueName: \"kubernetes.io/projected/e0b77f88-1fec-4816-9462-8b6fdaf09daf-kube-api-access-8p4m2\") pod \"whisker-6977ddfdf8-4s6xp\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " pod="calico-system/whisker-6977ddfdf8-4s6xp" Jan 17 00:29:16.475441 kubelet[3212]: I0117 00:29:16.473657 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/87d77883-a4c9-44f4-bd4d-b065491724ef-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-jvj5r\" (UID: \"87d77883-a4c9-44f4-bd4d-b065491724ef\") " pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:16.475659 kubelet[3212]: I0117 00:29:16.473735 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f761b8ec-f7d8-4ff6-9483-963882f3f6d4-calico-apiserver-certs\") pod \"calico-apiserver-6749bfd78c-xh4fx\" (UID: \"f761b8ec-f7d8-4ff6-9483-963882f3f6d4\") " pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" Jan 17 00:29:16.475659 kubelet[3212]: I0117 00:29:16.473757 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r52w\" (UniqueName: \"kubernetes.io/projected/782804bf-2c9e-4b36-ac94-4d730923b45e-kube-api-access-9r52w\") pod \"calico-apiserver-6749bfd78c-bw7sp\" (UID: 
\"782804bf-2c9e-4b36-ac94-4d730923b45e\") " pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" Jan 17 00:29:16.475614 systemd[1]: Created slice kubepods-besteffort-pod87d77883_a4c9_44f4_bd4d_b065491724ef.slice - libcontainer container kubepods-besteffort-pod87d77883_a4c9_44f4_bd4d_b065491724ef.slice. Jan 17 00:29:16.493795 systemd[1]: Created slice kubepods-besteffort-pod782804bf_2c9e_4b36_ac94_4d730923b45e.slice - libcontainer container kubepods-besteffort-pod782804bf_2c9e_4b36_ac94_4d730923b45e.slice. Jan 17 00:29:16.520381 systemd[1]: Created slice kubepods-besteffort-poddc357ba6_2c61_48b4_b7fe_5c77c584c2d0.slice - libcontainer container kubepods-besteffort-poddc357ba6_2c61_48b4_b7fe_5c77c584c2d0.slice. Jan 17 00:29:16.559585 containerd[1707]: time="2026-01-17T00:29:16.559527289Z" level=error msg="Failed to destroy network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.562189 containerd[1707]: time="2026-01-17T00:29:16.562125545Z" level=error msg="encountered an error cleaning up failed sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.562331 containerd[1707]: time="2026-01-17T00:29:16.562240347Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kvdv,Uid:47118e25-f9cc-45d1-87d8-eb13465b2075,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.562586 kubelet[3212]: E0117 00:29:16.562525 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.562705 kubelet[3212]: E0117 00:29:16.562608 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kvdv" Jan 17 00:29:16.562705 kubelet[3212]: E0117 00:29:16.562640 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7kvdv" Jan 17 00:29:16.562887 
kubelet[3212]: E0117 00:29:16.562717 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:16.564406 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38-shm.mount: Deactivated successfully. Jan 17 00:29:16.683795 containerd[1707]: time="2026-01-17T00:29:16.682200796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2cnwv,Uid:d4d1f5cb-6ccd-4c1d-9961-16d6f6063290,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:16.695372 containerd[1707]: time="2026-01-17T00:29:16.695327075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-xh4fx,Uid:f761b8ec-f7d8-4ff6-9483-963882f3f6d4,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:29:16.717172 containerd[1707]: time="2026-01-17T00:29:16.717041037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bfd8dc5f6-rbjmv,Uid:4b45b454-ebe6-4d21-bf83-a7855971fc58,Namespace:calico-system,Attempt:0,}" Jan 17 00:29:16.756964 containerd[1707]: time="2026-01-17T00:29:16.756837183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9p5lm,Uid:a13ebeb2-eb90-475a-98df-04917f3b6561,Namespace:kube-system,Attempt:0,}" Jan 17 00:29:16.770701 containerd[1707]: time="2026-01-17T00:29:16.770585175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6977ddfdf8-4s6xp,Uid:e0b77f88-1fec-4816-9462-8b6fdaf09daf,Namespace:calico-system,Attempt:0,}" Jan 17 00:29:16.799212 containerd[1707]: time="2026-01-17T00:29:16.798897976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jvj5r,Uid:87d77883-a4c9-44f4-bd4d-b065491724ef,Namespace:calico-system,Attempt:0,}" Jan 17 00:29:16.820783 containerd[1707]: time="2026-01-17T00:29:16.820731040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-bw7sp,Uid:782804bf-2c9e-4b36-ac94-4d730923b45e,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:29:16.832764 containerd[1707]: time="2026-01-17T00:29:16.832712995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bdcd7994c-plxvx,Uid:dc357ba6-2c61-48b4-b7fe-5c77c584c2d0,Namespace:calico-apiserver,Attempt:0,}" Jan 17 00:29:16.877868 containerd[1707]: time="2026-01-17T00:29:16.877656950Z" level=error msg="Failed to destroy network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.878076 containerd[1707]: time="2026-01-17T00:29:16.878039658Z" level=error msg="encountered an error cleaning up failed sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\", marking sandbox 
state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.878277 containerd[1707]: time="2026-01-17T00:29:16.878159461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-xh4fx,Uid:f761b8ec-f7d8-4ff6-9483-963882f3f6d4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.878431 containerd[1707]: time="2026-01-17T00:29:16.878399966Z" level=error msg="Failed to destroy network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.879676 containerd[1707]: time="2026-01-17T00:29:16.878813975Z" level=error msg="encountered an error cleaning up failed sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.879676 containerd[1707]: time="2026-01-17T00:29:16.878928077Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2cnwv,Uid:d4d1f5cb-6ccd-4c1d-9961-16d6f6063290,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.880653 kubelet[3212]: E0117 00:29:16.880598 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.880765 kubelet[3212]: E0117 00:29:16.880694 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-2cnwv" Jan 17 00:29:16.880765 kubelet[3212]: E0117 00:29:16.880722 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-66bc5c9577-2cnwv" Jan 17 00:29:16.880894 kubelet[3212]: E0117 00:29:16.880795 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-2cnwv_kube-system(d4d1f5cb-6ccd-4c1d-9961-16d6f6063290)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-2cnwv_kube-system(d4d1f5cb-6ccd-4c1d-9961-16d6f6063290)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2cnwv" podUID="d4d1f5cb-6ccd-4c1d-9961-16d6f6063290" Jan 17 00:29:16.885360 kubelet[3212]: E0117 00:29:16.885292 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:16.885478 kubelet[3212]: E0117 00:29:16.885364 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" Jan 17 00:29:16.885478 kubelet[3212]: E0117 00:29:16.885399 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" Jan 17 00:29:16.885583 kubelet[3212]: E0117 00:29:16.885489 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:17.034515 containerd[1707]: time="2026-01-17T00:29:17.034452282Z" level=error msg="Failed to destroy network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.037578 
containerd[1707]: time="2026-01-17T00:29:17.037289743Z" level=error msg="encountered an error cleaning up failed sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.038232 containerd[1707]: time="2026-01-17T00:29:17.037979057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bfd8dc5f6-rbjmv,Uid:4b45b454-ebe6-4d21-bf83-a7855971fc58,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.041076 kubelet[3212]: E0117 00:29:17.040691 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.041076 kubelet[3212]: E0117 00:29:17.040777 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" Jan 17 00:29:17.041076 kubelet[3212]: E0117 00:29:17.040823 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" Jan 17 00:29:17.041292 kubelet[3212]: E0117 00:29:17.040919 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:29:17.047639 containerd[1707]: time="2026-01-17T00:29:17.047115851Z" level=error msg="Failed to destroy network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.049091 containerd[1707]: time="2026-01-17T00:29:17.048141973Z" level=error msg="encountered an error cleaning up failed sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.049091 containerd[1707]: time="2026-01-17T00:29:17.048397379Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9p5lm,Uid:a13ebeb2-eb90-475a-98df-04917f3b6561,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.050396 kubelet[3212]: E0117 00:29:17.050347 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.050500 kubelet[3212]: E0117 00:29:17.050424 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9p5lm" Jan 17 00:29:17.050500 kubelet[3212]: E0117 00:29:17.050449 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-9p5lm" Jan 17 00:29:17.050608 kubelet[3212]: E0117 00:29:17.050520 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-9p5lm_kube-system(a13ebeb2-eb90-475a-98df-04917f3b6561)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-9p5lm_kube-system(a13ebeb2-eb90-475a-98df-04917f3b6561)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9p5lm" podUID="a13ebeb2-eb90-475a-98df-04917f3b6561" Jan 17 00:29:17.071263 kubelet[3212]: I0117 00:29:17.071208 3212 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:17.072133 containerd[1707]: time="2026-01-17T00:29:17.072090482Z" level=info msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" Jan 17 00:29:17.072379 containerd[1707]: time="2026-01-17T00:29:17.072349888Z" level=info msg="Ensure that sandbox 67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba in task-service has been cleanup successfully" Jan 17 00:29:17.076640 kubelet[3212]: I0117 00:29:17.075764 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:17.078323 containerd[1707]: time="2026-01-17T00:29:17.078288214Z" level=info msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" Jan 17 00:29:17.078874 containerd[1707]: time="2026-01-17T00:29:17.078498718Z" level=info msg="Ensure that sandbox f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408 in task-service has been cleanup successfully" Jan 17 00:29:17.084886 kubelet[3212]: I0117 00:29:17.084725 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:17.085700 containerd[1707]: time="2026-01-17T00:29:17.085669171Z" level=info msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\"" Jan 17 00:29:17.087184 containerd[1707]: time="2026-01-17T00:29:17.086560490Z" level=info msg="Ensure that sandbox 6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38 in task-service has been cleanup successfully" Jan 17 00:29:17.092866 kubelet[3212]: I0117 00:29:17.091951 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:17.097153 containerd[1707]: time="2026-01-17T00:29:17.096107793Z" level=info msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" Jan 17 00:29:17.097153 containerd[1707]: time="2026-01-17T00:29:17.096382398Z" level=info msg="Ensure that sandbox a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd in task-service has been cleanup successfully" Jan 17 00:29:17.102489 kubelet[3212]: I0117 00:29:17.102460 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:17.104971 containerd[1707]: time="2026-01-17T00:29:17.104141263Z" level=info msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\"" Jan 17 00:29:17.110544 containerd[1707]: time="2026-01-17T00:29:17.108029946Z" level=info msg="Ensure that sandbox 14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5 in task-service has been cleanup successfully" Jan 17 00:29:17.147922 containerd[1707]: time="2026-01-17T00:29:17.147832992Z" level=error msg="Failed to destroy network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.148632 containerd[1707]: time="2026-01-17T00:29:17.148581908Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.152909 containerd[1707]: time="2026-01-17T00:29:17.152869599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6977ddfdf8-4s6xp,Uid:e0b77f88-1fec-4816-9462-8b6fdaf09daf,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.153489 kubelet[3212]: E0117 00:29:17.153445 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.153705 kubelet[3212]: E0117 00:29:17.153512 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6977ddfdf8-4s6xp" Jan 17 00:29:17.153705 kubelet[3212]: E0117 00:29:17.153540 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6977ddfdf8-4s6xp" Jan 17 00:29:17.153705 kubelet[3212]: E0117 00:29:17.153604 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6977ddfdf8-4s6xp_calico-system(e0b77f88-1fec-4816-9462-8b6fdaf09daf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6977ddfdf8-4s6xp_calico-system(e0b77f88-1fec-4816-9462-8b6fdaf09daf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6977ddfdf8-4s6xp" podUID="e0b77f88-1fec-4816-9462-8b6fdaf09daf" Jan 17 00:29:17.161351 containerd[1707]: time="2026-01-17T00:29:17.161311278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:29:17.250450 containerd[1707]: time="2026-01-17T00:29:17.250337570Z" level=error msg="Failed to destroy network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.251828 containerd[1707]: time="2026-01-17T00:29:17.251520695Z" level=error msg="encountered an error cleaning up failed sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.252048 containerd[1707]: time="2026-01-17T00:29:17.251821402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-bw7sp,Uid:782804bf-2c9e-4b36-ac94-4d730923b45e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.252486 kubelet[3212]: E0117 00:29:17.252359 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.252486 kubelet[3212]: E0117 00:29:17.252452 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" Jan 17 00:29:17.252990 kubelet[3212]: E0117 00:29:17.252492 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" Jan 17 00:29:17.252990 kubelet[3212]: E0117 00:29:17.252599 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:17.278804 containerd[1707]: time="2026-01-17T00:29:17.278727073Z" level=error msg="Failed to destroy network for sandbox 
\"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.279878 containerd[1707]: time="2026-01-17T00:29:17.279209784Z" level=error msg="encountered an error cleaning up failed sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.279878 containerd[1707]: time="2026-01-17T00:29:17.279298386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jvj5r,Uid:87d77883-a4c9-44f4-bd4d-b065491724ef,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.280109 kubelet[3212]: E0117 00:29:17.279616 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.280109 kubelet[3212]: E0117 00:29:17.279691 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:17.280109 kubelet[3212]: E0117 00:29:17.279724 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-jvj5r" Jan 17 00:29:17.280266 kubelet[3212]: E0117 00:29:17.279799 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:17.302888 containerd[1707]: 
time="2026-01-17T00:29:17.301005647Z" level=error msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" failed" error="failed to destroy network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.303045 kubelet[3212]: E0117 00:29:17.301392 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:17.303045 kubelet[3212]: E0117 00:29:17.301486 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba"} Jan 17 00:29:17.303045 kubelet[3212]: E0117 00:29:17.301888 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4b45b454-ebe6-4d21-bf83-a7855971fc58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:17.303045 kubelet[3212]: E0117 00:29:17.301943 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4b45b454-ebe6-4d21-bf83-a7855971fc58\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:29:17.318737 containerd[1707]: time="2026-01-17T00:29:17.318094910Z" level=error msg="Failed to destroy network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.318737 containerd[1707]: time="2026-01-17T00:29:17.318341815Z" level=error msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" failed" error="failed to destroy network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.319039 kubelet[3212]: E0117 00:29:17.318620 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:17.319039 kubelet[3212]: E0117 00:29:17.318687 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"} Jan 17 00:29:17.319039 kubelet[3212]: E0117 00:29:17.318748 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f761b8ec-f7d8-4ff6-9483-963882f3f6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:17.319039 kubelet[3212]: E0117 00:29:17.318787 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f761b8ec-f7d8-4ff6-9483-963882f3f6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:17.319303 containerd[1707]: time="2026-01-17T00:29:17.319238834Z" level=error msg="encountered an error cleaning up failed sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.319362 containerd[1707]: time="2026-01-17T00:29:17.319311536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bdcd7994c-plxvx,Uid:dc357ba6-2c61-48b4-b7fe-5c77c584c2d0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.320332 kubelet[3212]: E0117 00:29:17.319596 3212 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.320332 kubelet[3212]: E0117 00:29:17.319648 3212 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" Jan 17 00:29:17.320332 kubelet[3212]: E0117 00:29:17.319671 3212 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" Jan 17 00:29:17.320528 kubelet[3212]: E0117 00:29:17.319732 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:17.322020 containerd[1707]: time="2026-01-17T00:29:17.321761088Z" level=error msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" failed" error="failed to destroy network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.323210 kubelet[3212]: E0117 00:29:17.322320 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:17.323210 kubelet[3212]: E0117 00:29:17.322385 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408"} Jan 17 00:29:17.323210 kubelet[3212]: E0117 00:29:17.322423 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:17.323210 kubelet[3212]: E0117 00:29:17.323165 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-2cnwv" podUID="d4d1f5cb-6ccd-4c1d-9961-16d6f6063290" Jan 17 00:29:17.326202 containerd[1707]: time="2026-01-17T00:29:17.326159081Z" level=error msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" failed" error="failed to destroy network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.326383 kubelet[3212]: E0117 00:29:17.326341 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:17.326461 kubelet[3212]: E0117 00:29:17.326385 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"} Jan 17 00:29:17.326461 kubelet[3212]: E0117 00:29:17.326418 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47118e25-f9cc-45d1-87d8-eb13465b2075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:17.326591 kubelet[3212]: E0117 00:29:17.326454 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47118e25-f9cc-45d1-87d8-eb13465b2075\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:17.328729 containerd[1707]: time="2026-01-17T00:29:17.328694735Z" level=error msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" failed" error="failed to destroy network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:17.328934 kubelet[3212]: E0117 00:29:17.328878 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy 
network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:17.328934 kubelet[3212]: E0117 00:29:17.328913 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd"} Jan 17 00:29:17.329096 kubelet[3212]: E0117 00:29:17.328950 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a13ebeb2-eb90-475a-98df-04917f3b6561\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:17.329096 kubelet[3212]: E0117 00:29:17.328980 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a13ebeb2-eb90-475a-98df-04917f3b6561\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-9p5lm" podUID="a13ebeb2-eb90-475a-98df-04917f3b6561" Jan 17 00:29:18.148907 kubelet[3212]: I0117 00:29:18.148835 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:18.150580 containerd[1707]: time="2026-01-17T00:29:18.149947288Z" level=info msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\"" Jan 17 00:29:18.150580 containerd[1707]: time="2026-01-17T00:29:18.150218694Z" level=info msg="Ensure that sandbox d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade in task-service has been cleanup successfully" Jan 17 00:29:18.152999 kubelet[3212]: I0117 00:29:18.152968 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:18.155608 containerd[1707]: time="2026-01-17T00:29:18.155166999Z" level=info msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\"" Jan 17 00:29:18.155608 containerd[1707]: time="2026-01-17T00:29:18.155365703Z" level=info msg="Ensure that sandbox 24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8 in task-service has been cleanup successfully" Jan 17 00:29:18.158916 kubelet[3212]: I0117 00:29:18.158888 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:18.159760 containerd[1707]: time="2026-01-17T00:29:18.159728796Z" level=info msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\"" Jan 17 00:29:18.164251 containerd[1707]: time="2026-01-17T00:29:18.164214591Z" level=info msg="Ensure that sandbox 
0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200 in task-service has been cleanup successfully" Jan 17 00:29:18.165963 kubelet[3212]: I0117 00:29:18.165545 3212 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:18.170188 containerd[1707]: time="2026-01-17T00:29:18.170161218Z" level=info msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\"" Jan 17 00:29:18.170577 containerd[1707]: time="2026-01-17T00:29:18.170550726Z" level=info msg="Ensure that sandbox 41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d in task-service has been cleanup successfully" Jan 17 00:29:18.247996 containerd[1707]: time="2026-01-17T00:29:18.247927370Z" level=error msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" failed" error="failed to destroy network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:18.248320 kubelet[3212]: E0117 00:29:18.248253 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:18.248320 kubelet[3212]: E0117 00:29:18.248310 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"} Jan 17 00:29:18.248690 kubelet[3212]: E0117 00:29:18.248356 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:18.248690 kubelet[3212]: E0117 00:29:18.248639 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6977ddfdf8-4s6xp" podUID="e0b77f88-1fec-4816-9462-8b6fdaf09daf" Jan 17 00:29:18.256759 containerd[1707]: time="2026-01-17T00:29:18.256549154Z" level=error msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" failed" error="failed to destroy network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:18.257333 kubelet[3212]: E0117 00:29:18.256778 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:18.257333 kubelet[3212]: E0117 00:29:18.256840 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"} Jan 17 00:29:18.257333 kubelet[3212]: E0117 00:29:18.256891 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87d77883-a4c9-44f4-bd4d-b065491724ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:18.257333 kubelet[3212]: E0117 00:29:18.256925 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87d77883-a4c9-44f4-bd4d-b065491724ef\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:18.263097 containerd[1707]: time="2026-01-17T00:29:18.262985990Z" level=error msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" failed" error="failed to destroy network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:18.263226 kubelet[3212]: E0117 00:29:18.263191 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:18.263323 kubelet[3212]: E0117 00:29:18.263234 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d"} Jan 17 00:29:18.263390 kubelet[3212]: E0117 00:29:18.263341 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:18.263497 kubelet[3212]: E0117 00:29:18.263396 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:18.264464 containerd[1707]: time="2026-01-17T00:29:18.264423521Z" level=error msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" failed" error="failed to destroy network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:29:18.264777 kubelet[3212]: E0117 00:29:18.264737 3212 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:18.264777 kubelet[3212]: E0117 00:29:18.264780 3212 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"} Jan 17 00:29:18.265002 kubelet[3212]: E0117 00:29:18.264811 3212 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"782804bf-2c9e-4b36-ac94-4d730923b45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:29:18.265002 kubelet[3212]: E0117 00:29:18.264853 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"782804bf-2c9e-4b36-ac94-4d730923b45e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:23.330327 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2079712394.mount: Deactivated successfully. Jan 17 00:29:23.368465 containerd[1707]: time="2026-01-17T00:29:23.368403669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:23.371413 containerd[1707]: time="2026-01-17T00:29:23.371232029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=156883675" Jan 17 00:29:23.374788 containerd[1707]: time="2026-01-17T00:29:23.374719903Z" level=info msg="ImageCreate event name:\"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:23.381641 containerd[1707]: time="2026-01-17T00:29:23.381569048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:29:23.382365 containerd[1707]: time="2026-01-17T00:29:23.382190861Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"156883537\" in 6.218294928s" Jan 17 00:29:23.382365 containerd[1707]: time="2026-01-17T00:29:23.382235262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:833e8e11d9dc187377eab6f31e275114a6b0f8f0afc3bf578a2a00507e85afc9\"" Jan 17 00:29:23.403995 containerd[1707]: time="2026-01-17T00:29:23.403831420Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:29:23.441118 containerd[1707]: time="2026-01-17T00:29:23.441064010Z" level=info msg="CreateContainer within sandbox \"649312ed767913946304d735a6f997f7253556eff645942cd4006b20ff8f5727\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325\"" Jan 17 00:29:23.442193 containerd[1707]: time="2026-01-17T00:29:23.442155433Z" level=info msg="StartContainer for \"8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325\"" Jan 17 00:29:23.478030 systemd[1]: Started cri-containerd-8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325.scope - libcontainer container 8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325. Jan 17 00:29:23.511893 containerd[1707]: time="2026-01-17T00:29:23.511730208Z" level=info msg="StartContainer for \"8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325\" returns successfully" Jan 17 00:29:23.634343 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:29:23.634502 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 17 00:29:23.738623 containerd[1707]: time="2026-01-17T00:29:23.737865502Z" level=info msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\"" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.833 [INFO][4492] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.833 [INFO][4492] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" iface="eth0" netns="/var/run/netns/cni-4566c9a4-106c-81d2-4dcf-fcaf6144bc88" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.837 [INFO][4492] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" iface="eth0" netns="/var/run/netns/cni-4566c9a4-106c-81d2-4dcf-fcaf6144bc88" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.837 [INFO][4492] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" iface="eth0" netns="/var/run/netns/cni-4566c9a4-106c-81d2-4dcf-fcaf6144bc88" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.837 [INFO][4492] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.837 [INFO][4492] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.870 [INFO][4506] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.871 [INFO][4506] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.871 [INFO][4506] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.879 [WARNING][4506] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.879 [INFO][4506] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0" Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.881 [INFO][4506] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:23.887079 containerd[1707]: 2026-01-17 00:29:23.884 [INFO][4492] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Jan 17 00:29:23.887809 containerd[1707]: time="2026-01-17T00:29:23.887761080Z" level=info msg="TearDown network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" successfully" Jan 17 00:29:23.887809 containerd[1707]: time="2026-01-17T00:29:23.887807681Z" level=info msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" returns successfully" Jan 17 00:29:24.034477 kubelet[3212]: I0117 00:29:24.034064 3212 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-ca-bundle\") pod \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " Jan 17 00:29:24.034477 kubelet[3212]: I0117 00:29:24.034125 3212 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-backend-key-pair\") pod \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " Jan 17 00:29:24.035052 kubelet[3212]: I0117 00:29:24.034519 3212 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e0b77f88-1fec-4816-9462-8b6fdaf09daf" (UID: "e0b77f88-1fec-4816-9462-8b6fdaf09daf"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:29:24.035109 kubelet[3212]: I0117 00:29:24.035051 3212 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8p4m2\" (UniqueName: \"kubernetes.io/projected/e0b77f88-1fec-4816-9462-8b6fdaf09daf-kube-api-access-8p4m2\") pod \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\" (UID: \"e0b77f88-1fec-4816-9462-8b6fdaf09daf\") " Jan 17 00:29:24.036880 kubelet[3212]: I0117 00:29:24.035172 3212 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-ca-bundle\") on node \"ci-4081.3.6-n-c809bb5d02\" DevicePath \"\"" Jan 17 00:29:24.038780 kubelet[3212]: I0117 00:29:24.038723 3212 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e0b77f88-1fec-4816-9462-8b6fdaf09daf" (UID: "e0b77f88-1fec-4816-9462-8b6fdaf09daf"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:29:24.039723 kubelet[3212]: I0117 00:29:24.039695 3212 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0b77f88-1fec-4816-9462-8b6fdaf09daf-kube-api-access-8p4m2" (OuterVolumeSpecName: "kube-api-access-8p4m2") pod "e0b77f88-1fec-4816-9462-8b6fdaf09daf" (UID: "e0b77f88-1fec-4816-9462-8b6fdaf09daf"). InnerVolumeSpecName "kube-api-access-8p4m2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:29:24.136348 kubelet[3212]: I0117 00:29:24.136289 3212 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0b77f88-1fec-4816-9462-8b6fdaf09daf-whisker-backend-key-pair\") on node \"ci-4081.3.6-n-c809bb5d02\" DevicePath \"\"" Jan 17 00:29:24.136348 kubelet[3212]: I0117 00:29:24.136338 3212 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8p4m2\" (UniqueName: \"kubernetes.io/projected/e0b77f88-1fec-4816-9462-8b6fdaf09daf-kube-api-access-8p4m2\") on node \"ci-4081.3.6-n-c809bb5d02\" DevicePath \"\"" Jan 17 00:29:24.189756 systemd[1]: Removed slice kubepods-besteffort-pode0b77f88_1fec_4816_9462_8b6fdaf09daf.slice - libcontainer container kubepods-besteffort-pode0b77f88_1fec_4816_9462_8b6fdaf09daf.slice. Jan 17 00:29:24.215566 kubelet[3212]: I0117 00:29:24.213649 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-b5k92" podStartSLOduration=1.679353854 podStartE2EDuration="20.213627489s" podCreationTimestamp="2026-01-17 00:29:04 +0000 UTC" firstStartedPulling="2026-01-17 00:29:04.84905205 +0000 UTC m=+22.047488021" lastFinishedPulling="2026-01-17 00:29:23.383325685 +0000 UTC m=+40.581761656" observedRunningTime="2026-01-17 00:29:24.211772349 +0000 UTC m=+41.410208320" watchObservedRunningTime="2026-01-17 00:29:24.213627489 +0000 UTC m=+41.412063660" Jan 17 00:29:24.329717 systemd[1]: run-netns-cni\x2d4566c9a4\x2d106c\x2d81d2\x2d4dcf\x2dfcaf6144bc88.mount: Deactivated successfully. Jan 17 00:29:24.335016 systemd[1]: var-lib-kubelet-pods-e0b77f88\x2d1fec\x2d4816\x2d9462\x2d8b6fdaf09daf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8p4m2.mount: Deactivated successfully. Jan 17 00:29:24.335118 systemd[1]: var-lib-kubelet-pods-e0b77f88\x2d1fec\x2d4816\x2d9462\x2d8b6fdaf09daf-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 17 00:29:24.347547 systemd[1]: Created slice kubepods-besteffort-podedba1a23_88e2_404b_a56f_6999060e2565.slice - libcontainer container kubepods-besteffort-podedba1a23_88e2_404b_a56f_6999060e2565.slice. 
Jan 17 00:29:24.438872 kubelet[3212]: I0117 00:29:24.438662 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/edba1a23-88e2-404b-a56f-6999060e2565-whisker-backend-key-pair\") pod \"whisker-5dbd59f56d-n649m\" (UID: \"edba1a23-88e2-404b-a56f-6999060e2565\") " pod="calico-system/whisker-5dbd59f56d-n649m" Jan 17 00:29:24.438872 kubelet[3212]: I0117 00:29:24.438735 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/edba1a23-88e2-404b-a56f-6999060e2565-whisker-ca-bundle\") pod \"whisker-5dbd59f56d-n649m\" (UID: \"edba1a23-88e2-404b-a56f-6999060e2565\") " pod="calico-system/whisker-5dbd59f56d-n649m" Jan 17 00:29:24.438872 kubelet[3212]: I0117 00:29:24.438778 3212 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g79ph\" (UniqueName: \"kubernetes.io/projected/edba1a23-88e2-404b-a56f-6999060e2565-kube-api-access-g79ph\") pod \"whisker-5dbd59f56d-n649m\" (UID: \"edba1a23-88e2-404b-a56f-6999060e2565\") " pod="calico-system/whisker-5dbd59f56d-n649m" Jan 17 00:29:24.656869 containerd[1707]: time="2026-01-17T00:29:24.656246673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbd59f56d-n649m,Uid:edba1a23-88e2-404b-a56f-6999060e2565,Namespace:calico-system,Attempt:0,}" Jan 17 00:29:24.804131 systemd-networkd[1582]: cali9324739d80b: Link UP Jan 17 00:29:24.804389 systemd-networkd[1582]: cali9324739d80b: Gained carrier Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.716 [INFO][4551] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.728 [INFO][4551] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0 whisker-5dbd59f56d- calico-system edba1a23-88e2-404b-a56f-6999060e2565 916 0 2026-01-17 00:29:24 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5dbd59f56d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 whisker-5dbd59f56d-n649m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9324739d80b [] [] }} ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.728 [INFO][4551] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.758 [INFO][4563] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" HandleID="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.758 [INFO][4563] ipam/ipam_plugin.go 275: 
Auto assigning IP ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" HandleID="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f7f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"whisker-5dbd59f56d-n649m", "timestamp":"2026-01-17 00:29:24.75802453 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.758 [INFO][4563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.758 [INFO][4563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.758 [INFO][4563] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.765 [INFO][4563] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.768 [INFO][4563] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.772 [INFO][4563] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.773 [INFO][4563] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.775 [INFO][4563] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.775 [INFO][4563] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.776 [INFO][4563] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073 Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.785 [INFO][4563] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.790 [INFO][4563] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.193/26] block=192.168.61.192/26 handle="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.790 [INFO][4563] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.193/26] handle="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.790 
[INFO][4563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:24.827690 containerd[1707]: 2026-01-17 00:29:24.790 [INFO][4563] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.193/26] IPv6=[] ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" HandleID="k8s-pod-network.e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.792 [INFO][4551] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0", GenerateName:"whisker-5dbd59f56d-", Namespace:"calico-system", SelfLink:"", UID:"edba1a23-88e2-404b-a56f-6999060e2565", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 24, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dbd59f56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"whisker-5dbd59f56d-n649m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9324739d80b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.792 [INFO][4551] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.193/32] ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.792 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9324739d80b ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.806 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.806 [INFO][4551] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint
ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0", GenerateName:"whisker-5dbd59f56d-", Namespace:"calico-system", SelfLink:"", UID:"edba1a23-88e2-404b-a56f-6999060e2565", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5dbd59f56d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073", Pod:"whisker-5dbd59f56d-n649m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.61.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9324739d80b", MAC:"8a:d1:8d:da:cc:d1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:24.828597 containerd[1707]: 2026-01-17 00:29:24.822 [INFO][4551] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073" Namespace="calico-system" Pod="whisker-5dbd59f56d-n649m" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--5dbd59f56d--n649m-eth0" Jan 17 00:29:24.849253 containerd[1707]: time="2026-01-17T00:29:24.849065161Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:24.850185 containerd[1707]: time="2026-01-17T00:29:24.850010381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:24.850185 containerd[1707]: time="2026-01-17T00:29:24.850036181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:24.850185 containerd[1707]: time="2026-01-17T00:29:24.850141083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:24.872158 systemd[1]: Started cri-containerd-e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073.scope - libcontainer container e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073. 
Jan 17 00:29:24.916181 kubelet[3212]: I0117 00:29:24.915903 3212 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0b77f88-1fec-4816-9462-8b6fdaf09daf" path="/var/lib/kubelet/pods/e0b77f88-1fec-4816-9462-8b6fdaf09daf/volumes" Jan 17 00:29:24.920010 containerd[1707]: time="2026-01-17T00:29:24.919962264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5dbd59f56d-n649m,Uid:edba1a23-88e2-404b-a56f-6999060e2565,Namespace:calico-system,Attempt:0,} returns sandbox id \"e6b3e499dfda1c5d3a34fdf28a08247572b1fdcd4596a529e89af7801f512073\"" Jan 17 00:29:24.922109 containerd[1707]: time="2026-01-17T00:29:24.922022607Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:29:25.162102 containerd[1707]: time="2026-01-17T00:29:25.162033496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:25.165460 containerd[1707]: time="2026-01-17T00:29:25.165376667Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:29:25.166017 containerd[1707]: time="2026-01-17T00:29:25.165393267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:29:25.166103 kubelet[3212]: E0117 00:29:25.165782 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:25.166103 kubelet[3212]: E0117 00:29:25.165867 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:25.166103 kubelet[3212]: E0117 00:29:25.165990 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:25.168561 containerd[1707]: time="2026-01-17T00:29:25.168449532Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:29:25.421404 containerd[1707]: time="2026-01-17T00:29:25.421109489Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:25.431757 containerd[1707]: time="2026-01-17T00:29:25.431137501Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 17 00:29:25.432268 containerd[1707]: time="2026-01-17T00:29:25.432227024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:25.432793 kubelet[3212]: E0117 00:29:25.432643 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:25.432959 kubelet[3212]: E0117 00:29:25.432818 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:25.433122 kubelet[3212]: E0117 00:29:25.433071 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:25.433181 kubelet[3212]: E0117 00:29:25.433133 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:29:25.549916 kernel: bpftool[4758]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:29:25.825426 systemd-networkd[1582]: vxlan.calico: Link UP Jan 17 00:29:25.825439 systemd-networkd[1582]: vxlan.calico: Gained carrier
Jan 17 00:29:26.204709 kubelet[3212]: E0117 00:29:26.204488 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:29:26.713259 systemd-networkd[1582]: cali9324739d80b: Gained IPv6LL Jan 17 00:29:27.544043 systemd-networkd[1582]: vxlan.calico: Gained IPv6LL Jan 17 00:29:28.913475 containerd[1707]: time="2026-01-17T00:29:28.913011920Z" level=info msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" Jan 17 00:29:28.915771 containerd[1707]: time="2026-01-17T00:29:28.913508731Z" level=info msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\"" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.980 [INFO][4854] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.981 [INFO][4854] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" iface="eth0" netns="/var/run/netns/cni-37644d7a-1944-8fac-1a95-3c56492b6945" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.982 [INFO][4854] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" iface="eth0" netns="/var/run/netns/cni-37644d7a-1944-8fac-1a95-3c56492b6945" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.982 [INFO][4854] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" iface="eth0" netns="/var/run/netns/cni-37644d7a-1944-8fac-1a95-3c56492b6945" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.982 [INFO][4854] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:28.982 [INFO][4854] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.025 [INFO][4867] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.025 [INFO][4867] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.025 [INFO][4867] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.035 [WARNING][4867] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.035 [INFO][4867] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.036 [INFO][4867] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:29.043885 containerd[1707]: 2026-01-17 00:29:29.039 [INFO][4854] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:29.045442 containerd[1707]: time="2026-01-17T00:29:29.044874116Z" level=info msg="TearDown network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" successfully" Jan 17 00:29:29.045442 containerd[1707]: time="2026-01-17T00:29:29.044924117Z" level=info msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" returns successfully" Jan 17 00:29:29.048454 systemd[1]: run-netns-cni\x2d37644d7a\x2d1944\x2d8fac\x2d1a95\x2d3c56492b6945.mount: Deactivated successfully. Jan 17 00:29:29.053442 containerd[1707]: time="2026-01-17T00:29:29.053385896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9p5lm,Uid:a13ebeb2-eb90-475a-98df-04917f3b6561,Namespace:kube-system,Attempt:1,}" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.983 [INFO][4855] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.983 [INFO][4855] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" iface="eth0" netns="/var/run/netns/cni-cb965493-35e5-8f76-3173-fe79c764497a" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.983 [INFO][4855] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" iface="eth0" netns="/var/run/netns/cni-cb965493-35e5-8f76-3173-fe79c764497a"
Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.988 [INFO][4855] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" iface="eth0" netns="/var/run/netns/cni-cb965493-35e5-8f76-3173-fe79c764497a" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.989 [INFO][4855] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:28.989 [INFO][4855] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.034 [INFO][4872] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.034 [INFO][4872] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.036 [INFO][4872] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.050 [WARNING][4872] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.050 [INFO][4872] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.052 [INFO][4872] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:29.058155 containerd[1707]: 2026-01-17 00:29:29.056 [INFO][4855] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Jan 17 00:29:29.058694 containerd[1707]: time="2026-01-17T00:29:29.058307301Z" level=info msg="TearDown network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" successfully" Jan 17 00:29:29.058694 containerd[1707]: time="2026-01-17T00:29:29.058336701Z" level=info msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" returns successfully" Jan 17 00:29:29.063671 systemd[1]: run-netns-cni\x2dcb965493\x2d35e5\x2d8f76\x2d3173\x2dfe79c764497a.mount: Deactivated successfully.
Jan 17 00:29:29.065501 containerd[1707]: time="2026-01-17T00:29:29.065457052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-xh4fx,Uid:f761b8ec-f7d8-4ff6-9483-963882f3f6d4,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:29:29.562460 systemd-networkd[1582]: calid2e1f851abb: Link UP Jan 17 00:29:29.564965 systemd-networkd[1582]: calid2e1f851abb: Gained carrier Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.467 [INFO][4881] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0 coredns-66bc5c9577- kube-system a13ebeb2-eb90-475a-98df-04917f3b6561 945 0 2026-01-17 00:28:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 coredns-66bc5c9577-9p5lm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid2e1f851abb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.467 [INFO][4881] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.503 [INFO][4894] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" HandleID="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.504 [INFO][4894] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" HandleID="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024efe0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"coredns-66bc5c9577-9p5lm", "timestamp":"2026-01-17 00:29:29.503732544 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.504 [INFO][4894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.504 [INFO][4894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.504 [INFO][4894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.513 [INFO][4894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.518 [INFO][4894] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.524 [INFO][4894] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.529 [INFO][4894] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.532 [INFO][4894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.532 [INFO][4894] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.534 [INFO][4894] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573 Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.541 [INFO][4894] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.551 [INFO][4894] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.194/26] block=192.168.61.192/26 handle="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.552 [INFO][4894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.194/26] handle="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.552 [INFO][4894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:29:29.604365 containerd[1707]: 2026-01-17 00:29:29.552 [INFO][4894] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.194/26] IPv6=[] ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" HandleID="k8s-pod-network.6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.606092 containerd[1707]: 2026-01-17 00:29:29.555 [INFO][4881] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a13ebeb2-eb90-475a-98df-04917f3b6561", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"coredns-66bc5c9577-9p5lm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2e1f851abb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:29.606092 containerd[1707]: 2026-01-17 00:29:29.555 [INFO][4881] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.194/32] ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0"
Jan 17 00:29:29.606092 containerd[1707]: 2026-01-17 00:29:29.556 [INFO][4881] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid2e1f851abb ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.606092 containerd[1707]: 2026-01-17 00:29:29.565 [INFO][4881] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:29.606092 containerd[1707]: 2026-01-17 00:29:29.566 [INFO][4881] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a13ebeb2-eb90-475a-98df-04917f3b6561", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573", Pod:"coredns-66bc5c9577-9p5lm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2e1f851abb", MAC:"e2:2a:95:be:d4:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:29.606667 containerd[1707]: 2026-01-17 00:29:29.593 [INFO][4881] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573" Namespace="kube-system" Pod="coredns-66bc5c9577-9p5lm" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0"
Jan 17 00:29:29.649328 containerd[1707]: time="2026-01-17T00:29:29.649174328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:29.649549 containerd[1707]: time="2026-01-17T00:29:29.649296530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:29.649651 containerd[1707]: time="2026-01-17T00:29:29.649577536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:29.649765 containerd[1707]: time="2026-01-17T00:29:29.649719739Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:29.663975 systemd-networkd[1582]: cali2078bbd6c9c: Link UP Jan 17 00:29:29.666994 systemd-networkd[1582]: cali2078bbd6c9c: Gained carrier Jan 17 00:29:29.700890 systemd[1]: Started cri-containerd-6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573.scope - libcontainer container 6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573. Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.533 [INFO][4899] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0 calico-apiserver-6749bfd78c- calico-apiserver f761b8ec-f7d8-4ff6-9483-963882f3f6d4 946 0 2026-01-17 00:28:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6749bfd78c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 calico-apiserver-6749bfd78c-xh4fx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2078bbd6c9c [] [] }} ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.534 [INFO][4899] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.592 [INFO][4913] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" HandleID="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.594 [INFO][4913] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" HandleID="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"calico-apiserver-6749bfd78c-xh4fx", "timestamp":"2026-01-17 00:29:29.592451425 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.594 [INFO][4913] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.594 [INFO][4913] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.594 [INFO][4913] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.614 [INFO][4913] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.619 [INFO][4913] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.625 [INFO][4913] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.627 [INFO][4913] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.631 [INFO][4913] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.631 [INFO][4913] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.633 [INFO][4913] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4 Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.642 [INFO][4913] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.655 [INFO][4913] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.195/26] block=192.168.61.192/26 handle="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.655 [INFO][4913] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.195/26] handle="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.655 [INFO][4913] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:29.704163 containerd[1707]: 2026-01-17 00:29:29.655 [INFO][4913] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.195/26] IPv6=[] ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" HandleID="k8s-pod-network.496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.657 [INFO][4899] cni-plugin/k8s.go 418: Populated endpoint ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f761b8ec-f7d8-4ff6-9483-963882f3f6d4", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"calico-apiserver-6749bfd78c-xh4fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2078bbd6c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.657 [INFO][4899] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.195/32] ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.657 [INFO][4899] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2078bbd6c9c ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.670 [INFO][4899] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.671 [INFO][4899] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f761b8ec-f7d8-4ff6-9483-963882f3f6d4", ResourceVersion:"946", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4", Pod:"calico-apiserver-6749bfd78c-xh4fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2078bbd6c9c", MAC:"5e:50:6c:4e:5e:6a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:29.705085 containerd[1707]: 2026-01-17 00:29:29.696 [INFO][4899] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-xh4fx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0" Jan 17 00:29:29.765150 containerd[1707]: time="2026-01-17T00:29:29.765042384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:29.765543 containerd[1707]: time="2026-01-17T00:29:29.765172587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:29.765543 containerd[1707]: time="2026-01-17T00:29:29.765194988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:29:29.765543 containerd[1707]: time="2026-01-17T00:29:29.765288390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:29.806855 containerd[1707]: time="2026-01-17T00:29:29.806620566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-9p5lm,Uid:a13ebeb2-eb90-475a-98df-04917f3b6561,Namespace:kube-system,Attempt:1,} returns sandbox id \"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573\"" Jan 17 00:29:29.812042 systemd[1]: Started cri-containerd-496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4.scope - libcontainer container 496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4. Jan 17 00:29:29.828803 containerd[1707]: time="2026-01-17T00:29:29.828289825Z" level=info msg="CreateContainer within sandbox \"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:29.914620 containerd[1707]: time="2026-01-17T00:29:29.914252748Z" level=info msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\"" Jan 17 00:29:29.915999 containerd[1707]: time="2026-01-17T00:29:29.915816781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-xh4fx,Uid:f761b8ec-f7d8-4ff6-9483-963882f3f6d4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4\"" Jan 17 00:29:29.920643 containerd[1707]: time="2026-01-17T00:29:29.920617183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.968 [INFO][5029] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.968 [INFO][5029] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" iface="eth0" netns="/var/run/netns/cni-6cfff278-686d-2596-ab69-59c5134e4a75" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.968 [INFO][5029] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" iface="eth0" netns="/var/run/netns/cni-6cfff278-686d-2596-ab69-59c5134e4a75"
Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.969 [INFO][5029] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" iface="eth0" netns="/var/run/netns/cni-6cfff278-686d-2596-ab69-59c5134e4a75" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.969 [INFO][5029] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:29.969 [INFO][5029] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.000 [INFO][5037] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.034 [INFO][5037] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.035 [INFO][5037] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.051 [WARNING][5037] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.051 [INFO][5037] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.055 [INFO][5037] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:30.061070 containerd[1707]: 2026-01-17 00:29:30.058 [INFO][5029] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Jan 17 00:29:30.063347 containerd[1707]: time="2026-01-17T00:29:30.062146083Z" level=info msg="TearDown network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" successfully" Jan 17 00:29:30.063347 containerd[1707]: time="2026-01-17T00:29:30.062219385Z" level=info msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" returns successfully" Jan 17 00:29:30.067634 systemd[1]: run-netns-cni\x2d6cfff278\x2d686d\x2d2596\x2dab69\x2d59c5134e4a75.mount: Deactivated successfully.
Jan 17 00:29:30.130119 containerd[1707]: time="2026-01-17T00:29:30.129954021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kvdv,Uid:47118e25-f9cc-45d1-87d8-eb13465b2075,Namespace:calico-system,Attempt:1,}" Jan 17 00:29:30.182161 containerd[1707]: time="2026-01-17T00:29:30.182098526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:30.225336 containerd[1707]: time="2026-01-17T00:29:30.224589727Z" level=info msg="CreateContainer within sandbox \"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b74a6082dc006bb0c0a38c9dafe502a0b3dfe6681f627c99720a877a88a9b5f2\"" Jan 17 00:29:30.225883 containerd[1707]: time="2026-01-17T00:29:30.225832454Z" level=info msg="StartContainer for \"b74a6082dc006bb0c0a38c9dafe502a0b3dfe6681f627c99720a877a88a9b5f2\"" Jan 17 00:29:30.268025 systemd[1]: Started cri-containerd-b74a6082dc006bb0c0a38c9dafe502a0b3dfe6681f627c99720a877a88a9b5f2.scope - libcontainer container b74a6082dc006bb0c0a38c9dafe502a0b3dfe6681f627c99720a877a88a9b5f2. Jan 17 00:29:30.335137 containerd[1707]: time="2026-01-17T00:29:30.334575859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:30.335896 containerd[1707]: time="2026-01-17T00:29:30.335747484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:30.337910 kubelet[3212]: E0117 00:29:30.337801 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:30.338556 kubelet[3212]: E0117 00:29:30.338079 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:30.338705 kubelet[3212]: E0117 00:29:30.338667 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:30.340977 containerd[1707]: time="2026-01-17T00:29:30.340200078Z" level=info msg="StartContainer for \"b74a6082dc006bb0c0a38c9dafe502a0b3dfe6681f627c99720a877a88a9b5f2\" returns successfully"
Jan 17 00:29:30.341076 kubelet[3212]: E0117 00:29:30.338916 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:30.644425 systemd-networkd[1582]: cali05defc2f6b9: Link UP Jan 17 00:29:30.645962 systemd-networkd[1582]: cali05defc2f6b9: Gained carrier Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.564 [INFO][5079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0 csi-node-driver- calico-system 47118e25-f9cc-45d1-87d8-eb13465b2075 956 0 2026-01-17 00:29:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 csi-node-driver-7kvdv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali05defc2f6b9 [] [] }} ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.564 [INFO][5079] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.592 [INFO][5092] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" HandleID="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.592 [INFO][5092] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" HandleID="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c5660), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"csi-node-driver-7kvdv", "timestamp":"2026-01-17 00:29:30.592728832 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.593 [INFO][5092] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.593 [INFO][5092] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.593 [INFO][5092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.599 [INFO][5092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.603 [INFO][5092] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.607 [INFO][5092] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.609 [INFO][5092] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.611 [INFO][5092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.611 [INFO][5092] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.612 [INFO][5092] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.623 [INFO][5092] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.632 [INFO][5092] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.196/26] block=192.168.61.192/26 handle="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.632 [INFO][5092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.196/26] handle="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.632 [INFO][5092] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:29:30.665188 containerd[1707]: 2026-01-17 00:29:30.632 [INFO][5092] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.196/26] IPv6=[] ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" HandleID="k8s-pod-network.e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.637 [INFO][5079] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47118e25-f9cc-45d1-87d8-eb13465b2075", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"csi-node-driver-7kvdv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05defc2f6b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.637 [INFO][5079] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.196/32] ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.637 [INFO][5079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05defc2f6b9 ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.645 [INFO][5079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.646 [INFO][5079] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47118e25-f9cc-45d1-87d8-eb13465b2075", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e", Pod:"csi-node-driver-7kvdv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05defc2f6b9", MAC:"aa:c7:91:c7:6b:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:30.666167 containerd[1707]: 2026-01-17 00:29:30.662 [INFO][5079] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e" Namespace="calico-system" Pod="csi-node-driver-7kvdv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0" Jan 17 00:29:30.697818 containerd[1707]: time="2026-01-17T00:29:30.697437752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:30.697818 containerd[1707]: time="2026-01-17T00:29:30.697539954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:30.697818 containerd[1707]: time="2026-01-17T00:29:30.697562755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:30.697818 containerd[1707]: time="2026-01-17T00:29:30.697665857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:30.719033 systemd[1]: Started cri-containerd-e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e.scope - libcontainer container e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e.
Jan 17 00:29:30.768566 containerd[1707]: time="2026-01-17T00:29:30.768491058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7kvdv,Uid:47118e25-f9cc-45d1-87d8-eb13465b2075,Namespace:calico-system,Attempt:1,} returns sandbox id \"e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e\"" Jan 17 00:29:30.772977 containerd[1707]: time="2026-01-17T00:29:30.772932853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:29:30.912859 containerd[1707]: time="2026-01-17T00:29:30.911697995Z" level=info msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" Jan 17 00:29:30.918213 containerd[1707]: time="2026-01-17T00:29:30.914768760Z" level=info msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" Jan 17 00:29:30.918213 containerd[1707]: time="2026-01-17T00:29:30.915362372Z" level=info msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\"" Jan 17 00:29:31.025666 containerd[1707]: time="2026-01-17T00:29:31.025053201Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:31.029759 containerd[1707]: time="2026-01-17T00:29:31.029663200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:29:31.030193 containerd[1707]: time="2026-01-17T00:29:31.030020407Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:29:31.031678 kubelet[3212]: E0117 00:29:31.031429 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:31.031678 kubelet[3212]: E0117 00:29:31.031628 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:31.032159 kubelet[3212]: E0117 00:29:31.032047 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:31.033967 containerd[1707]: time="2026-01-17T00:29:31.033689786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.024 [INFO][5176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.026 [INFO][5176] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" iface="eth0" netns="/var/run/netns/cni-207a0f1a-9cf4-3f2f-3420-c5405d1d65df" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.027 [INFO][5176] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" iface="eth0" netns="/var/run/netns/cni-207a0f1a-9cf4-3f2f-3420-c5405d1d65df" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.028 [INFO][5176] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" iface="eth0" netns="/var/run/netns/cni-207a0f1a-9cf4-3f2f-3420-c5405d1d65df" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.028 [INFO][5176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.028 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.090 [INFO][5198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.091 [INFO][5198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.091 [INFO][5198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.100 [WARNING][5198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.101 [INFO][5198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.103 [INFO][5198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:31.120723 containerd[1707]: 2026-01-17 00:29:31.110 [INFO][5176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Jan 17 00:29:31.125012 containerd[1707]: time="2026-01-17T00:29:31.124001922Z" level=info msg="TearDown network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" successfully" Jan 17 00:29:31.125012 containerd[1707]: time="2026-01-17T00:29:31.124056023Z" level=info msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" returns successfully" Jan 17 00:29:31.125950 systemd[1]: run-netns-cni\x2d207a0f1a\x2d9cf4\x2d3f2f\x2d3420\x2dc5405d1d65df.mount: Deactivated successfully. 
Jan 17 00:29:31.133956 containerd[1707]: time="2026-01-17T00:29:31.133905934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jvj5r,Uid:87d77883-a4c9-44f4-bd4d-b065491724ef,Namespace:calico-system,Attempt:1,}" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.019 [INFO][5168] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.019 [INFO][5168] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" iface="eth0" netns="/var/run/netns/cni-e7d833aa-86f8-981b-314e-23d63620b873" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.020 [INFO][5168] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" iface="eth0" netns="/var/run/netns/cni-e7d833aa-86f8-981b-314e-23d63620b873" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.021 [INFO][5168] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" iface="eth0" netns="/var/run/netns/cni-e7d833aa-86f8-981b-314e-23d63620b873" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.021 [INFO][5168] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.021 [INFO][5168] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.103 [INFO][5194] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.108 [INFO][5194] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.108 [INFO][5194] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.129 [WARNING][5194] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.130 [INFO][5194] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.133 [INFO][5194] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:31.137920 containerd[1707]: 2026-01-17 00:29:31.135 [INFO][5168] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:31.142838 containerd[1707]: time="2026-01-17T00:29:31.138972543Z" level=info msg="TearDown network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" successfully" Jan 17 00:29:31.142838 containerd[1707]: time="2026-01-17T00:29:31.139025944Z" level=info msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" returns successfully" Jan 17 00:29:31.147543 systemd[1]: run-netns-cni\x2de7d833aa\x2d86f8\x2d981b\x2d314e\x2d23d63620b873.mount: Deactivated successfully. Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.024 [INFO][5175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.024 [INFO][5175] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" iface="eth0" netns="/var/run/netns/cni-c62c00c4-b594-01f1-2bdc-c4bf43604834" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.025 [INFO][5175] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" iface="eth0" netns="/var/run/netns/cni-c62c00c4-b594-01f1-2bdc-c4bf43604834" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.027 [INFO][5175] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" iface="eth0" netns="/var/run/netns/cni-c62c00c4-b594-01f1-2bdc-c4bf43604834" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.027 [INFO][5175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.028 [INFO][5175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.115 [INFO][5196] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.117 [INFO][5196] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.133 [INFO][5196] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.153 [WARNING][5196] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.153 [INFO][5196] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.156 [INFO][5196] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:31.164625 containerd[1707]: 2026-01-17 00:29:31.160 [INFO][5175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:31.166433 containerd[1707]: time="2026-01-17T00:29:31.165731916Z" level=info msg="TearDown network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" successfully" Jan 17 00:29:31.166433 containerd[1707]: time="2026-01-17T00:29:31.165803818Z" level=info msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" returns successfully" Jan 17 00:29:31.171732 systemd[1]: run-netns-cni\x2dc62c00c4\x2db594\x2d01f1\x2d2bdc\x2dc4bf43604834.mount: Deactivated successfully. Jan 17 00:29:31.178342 containerd[1707]: time="2026-01-17T00:29:31.178196284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2cnwv,Uid:d4d1f5cb-6ccd-4c1d-9961-16d6f6063290,Namespace:kube-system,Attempt:1,}" Jan 17 00:29:31.192514 systemd-networkd[1582]: calid2e1f851abb: Gained IPv6LL Jan 17 00:29:31.229073 kubelet[3212]: E0117 00:29:31.228768 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:31.262238 kubelet[3212]: I0117 00:29:31.261251 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-9p5lm" podStartSLOduration=43.261223163 podStartE2EDuration="43.261223163s" podCreationTimestamp="2026-01-17 00:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:31.246410646 +0000 UTC m=+48.444846717" watchObservedRunningTime="2026-01-17 00:29:31.261223163 +0000 UTC m=+48.459659134" Jan 17 00:29:31.286581 containerd[1707]: time="2026-01-17T00:29:31.286523906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bfd8dc5f6-rbjmv,Uid:4b45b454-ebe6-4d21-bf83-a7855971fc58,Namespace:calico-system,Attempt:1,}" Jan 17 00:29:31.300087 containerd[1707]: time="2026-01-17T00:29:31.299984194Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:31.476424 containerd[1707]: 
time="2026-01-17T00:29:31.476117870Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:29:31.476424 containerd[1707]: time="2026-01-17T00:29:31.476265173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:29:31.477885 kubelet[3212]: E0117 00:29:31.476812 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:31.477885 kubelet[3212]: E0117 00:29:31.476909 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:31.477885 kubelet[3212]: E0117 00:29:31.477009 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:31.478436 kubelet[3212]: E0117 00:29:31.477067 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:31.704314 systemd-networkd[1582]: cali2078bbd6c9c: Gained IPv6LL Jan 17 00:29:31.911907 containerd[1707]: time="2026-01-17T00:29:31.911454402Z" level=info msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\"" Jan 17 00:29:31.961181 systemd-networkd[1582]: cali05defc2f6b9: Gained IPv6LL Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.960 [INFO][5233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 
00:29:31.960 [INFO][5233] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" iface="eth0" netns="/var/run/netns/cni-870eb06b-5dcf-a668-663a-a246eae1eeee" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.961 [INFO][5233] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" iface="eth0" netns="/var/run/netns/cni-870eb06b-5dcf-a668-663a-a246eae1eeee" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.962 [INFO][5233] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" iface="eth0" netns="/var/run/netns/cni-870eb06b-5dcf-a668-663a-a246eae1eeee" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.962 [INFO][5233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.962 [INFO][5233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.996 [INFO][5241] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.996 [INFO][5241] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:31.996 [INFO][5241] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:32.004 [WARNING][5241] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:32.004 [INFO][5241] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:32.005 [INFO][5241] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:32.010497 containerd[1707]: 2026-01-17 00:29:32.008 [INFO][5233] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:32.016393 containerd[1707]: time="2026-01-17T00:29:32.013472889Z" level=info msg="TearDown network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" successfully" Jan 17 00:29:32.016393 containerd[1707]: time="2026-01-17T00:29:32.013524490Z" level=info msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" returns successfully" Jan 17 00:29:32.017125 systemd[1]: run-netns-cni\x2d870eb06b\x2d5dcf\x2da668\x2d663a\x2da246eae1eeee.mount: Deactivated successfully. Jan 17 00:29:32.034985 containerd[1707]: time="2026-01-17T00:29:32.034421238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-bdcd7994c-plxvx,Uid:dc357ba6-2c61-48b4-b7fe-5c77c584c2d0,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:29:32.243034 kubelet[3212]: E0117 00:29:32.242664 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:32.330593 systemd-networkd[1582]: cali1fba24b2fad: Link UP Jan 17 00:29:32.330924 systemd-networkd[1582]: cali1fba24b2fad: Gained carrier Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.222 [INFO][5247] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0 goldmane-7c778bb748- calico-system 87d77883-a4c9-44f4-bd4d-b065491724ef 975 0 2026-01-17 00:29:02 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 goldmane-7c778bb748-jvj5r eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1fba24b2fad [] [] }} ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.222 [INFO][5247] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.262 [INFO][5260] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" HandleID="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.263 [INFO][5260] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" HandleID="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d55d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"goldmane-7c778bb748-jvj5r", "timestamp":"2026-01-17 00:29:32.262887135 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.263 [INFO][5260] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.263 [INFO][5260] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.263 [INFO][5260] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.272 [INFO][5260] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.276 [INFO][5260] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.286 [INFO][5260] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.294 [INFO][5260] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.303 [INFO][5260] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.303 [INFO][5260] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.305 [INFO][5260] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32 Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.313 [INFO][5260] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.322 [INFO][5260] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.197/26] block=192.168.61.192/26 handle="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" 
host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.323 [INFO][5260] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.197/26] handle="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.323 [INFO][5260] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:32.365451 containerd[1707]: 2026-01-17 00:29:32.323 [INFO][5260] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.197/26] IPv6=[] ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" HandleID="k8s-pod-network.0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.326 [INFO][5247] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d77883-a4c9-44f4-bd4d-b065491724ef", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"goldmane-7c778bb748-jvj5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fba24b2fad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.326 [INFO][5247] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.197/32] ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.326 [INFO][5247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1fba24b2fad ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.332 [INFO][5247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.332 [INFO][5247] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d77883-a4c9-44f4-bd4d-b065491724ef", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32", Pod:"goldmane-7c778bb748-jvj5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fba24b2fad", MAC:"96:ac:71:22:4a:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.366257 containerd[1707]: 2026-01-17 00:29:32.359 [INFO][5247] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32" Namespace="calico-system" Pod="goldmane-7c778bb748-jvj5r" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0" Jan 17 00:29:32.502604 systemd-networkd[1582]: cali60605130bbc: Link UP Jan 17 00:29:32.504143 systemd-networkd[1582]: cali60605130bbc: Gained carrier Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.429 [INFO][5276] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0 calico-kube-controllers-bfd8dc5f6- calico-system 4b45b454-ebe6-4d21-bf83-a7855971fc58 976 0 2026-01-17 00:29:04 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:bfd8dc5f6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 calico-kube-controllers-bfd8dc5f6-rbjmv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali60605130bbc [] [] }} ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" 
Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.429 [INFO][5276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.458 [INFO][5290] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" HandleID="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.458 [INFO][5290] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" HandleID="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"calico-kube-controllers-bfd8dc5f6-rbjmv", "timestamp":"2026-01-17 00:29:32.458402126 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.458 [INFO][5290] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.458 [INFO][5290] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.458 [INFO][5290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.466 [INFO][5290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.470 [INFO][5290] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.473 [INFO][5290] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.476 [INFO][5290] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.478 [INFO][5290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.478 [INFO][5290] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.479 [INFO][5290] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13 Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.485 [INFO][5290] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.494 [INFO][5290] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.198/26] block=192.168.61.192/26 handle="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.494 [INFO][5290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.198/26] handle="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.494 [INFO][5290] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
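The IPAM entries above all draw from the same host-affine block, 192.168.61.192/26, handing out the next free /32 under the host-wide lock; the pods in this log receive .196 through .199 in sequence. A sketch that simply enumerates that block with the standard library, to make the address space concrete (this is not Calico's allocator):

```go
// ipamblock.go - enumerates the /26 block the IPAM log entries are drawing
// from. Sequential assignment walks the block in order; offsets 4..7 are the
// .196-.199 addresses handed to the pods in this log.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.61.192/26")
	i := 0
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if i >= 4 && i <= 7 { // the four pod addresses seen above
			fmt.Printf("offset %d: %s\n", i, a)
		}
		i++
	}
	fmt.Println("block size:", i) // 64 addresses in a /26
}
```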
Jan 17 00:29:32.524836 containerd[1707]: 2026-01-17 00:29:32.495 [INFO][5290] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.198/26] IPv6=[] ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" HandleID="k8s-pod-network.abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.496 [INFO][5276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0", GenerateName:"calico-kube-controllers-bfd8dc5f6-", Namespace:"calico-system", SelfLink:"", UID:"4b45b454-ebe6-4d21-bf83-a7855971fc58", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bfd8dc5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"calico-kube-controllers-bfd8dc5f6-rbjmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60605130bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.496 [INFO][5276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.198/32] ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.496 [INFO][5276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60605130bbc ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.505 [INFO][5276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 
17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.506 [INFO][5276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0", GenerateName:"calico-kube-controllers-bfd8dc5f6-", Namespace:"calico-system", SelfLink:"", UID:"4b45b454-ebe6-4d21-bf83-a7855971fc58", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bfd8dc5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13", Pod:"calico-kube-controllers-bfd8dc5f6-rbjmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60605130bbc", MAC:"96:6f:43:3f:b8:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.526173 containerd[1707]: 2026-01-17 00:29:32.520 [INFO][5276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13" Namespace="calico-system" Pod="calico-kube-controllers-bfd8dc5f6-rbjmv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:32.548629 containerd[1707]: time="2026-01-17T00:29:32.548529358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:32.550870 containerd[1707]: time="2026-01-17T00:29:32.549339976Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:32.550870 containerd[1707]: time="2026-01-17T00:29:32.549365576Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.551283 containerd[1707]: time="2026-01-17T00:29:32.551178915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.575044 systemd[1]: Started cri-containerd-0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32.scope - libcontainer container 0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32. 
Jan 17 00:29:32.637416 containerd[1707]: time="2026-01-17T00:29:32.637368563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-jvj5r,Uid:87d77883-a4c9-44f4-bd4d-b065491724ef,Namespace:calico-system,Attempt:1,} returns sandbox id \"0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32\"" Jan 17 00:29:32.640128 containerd[1707]: time="2026-01-17T00:29:32.640098721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:29:32.719816 systemd-networkd[1582]: cali1cae6cf4ca6: Link UP Jan 17 00:29:32.722671 systemd-networkd[1582]: cali1cae6cf4ca6: Gained carrier Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.643 [INFO][5337] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0 coredns-66bc5c9577- kube-system d4d1f5cb-6ccd-4c1d-9961-16d6f6063290 974 0 2026-01-17 00:28:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 coredns-66bc5c9577-2cnwv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cae6cf4ca6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.643 [INFO][5337] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.669 [INFO][5356] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" HandleID="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.669 [INFO][5356] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" HandleID="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"coredns-66bc5c9577-2cnwv", "timestamp":"2026-01-17 00:29:32.669375449 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.669 [INFO][5356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.669 [INFO][5356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.669 [INFO][5356] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.676 [INFO][5356] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.680 [INFO][5356] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.685 [INFO][5356] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.688 [INFO][5356] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.690 [INFO][5356] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.690 [INFO][5356] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.693 [INFO][5356] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0 Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.698 [INFO][5356] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.712 [INFO][5356] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.199/26] block=192.168.61.192/26 handle="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.712 [INFO][5356] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.199/26] handle="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.712 [INFO][5356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:29:32.759138 containerd[1707]: 2026-01-17 00:29:32.712 [INFO][5356] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.199/26] IPv6=[] ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" HandleID="k8s-pod-network.4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.761272 containerd[1707]: 2026-01-17 00:29:32.715 [INFO][5337] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"coredns-66bc5c9577-2cnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cae6cf4ca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.761272 containerd[1707]: 2026-01-17 00:29:32.715 [INFO][5337] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.199/32] ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.761272 containerd[1707]: 2026-01-17 00:29:32.715 [INFO][5337] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cae6cf4ca6 ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" 
WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.761272 containerd[1707]: 2026-01-17 00:29:32.723 [INFO][5337] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.761272 containerd[1707]: 2026-01-17 00:29:32.725 [INFO][5337] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0", Pod:"coredns-66bc5c9577-2cnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cae6cf4ca6", MAC:"ea:87:53:03:19:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:32.762025 containerd[1707]: 2026-01-17 00:29:32.754 [INFO][5337] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0" Namespace="kube-system" Pod="coredns-66bc5c9577-2cnwv" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:32.769905 containerd[1707]: time="2026-01-17T00:29:32.769765701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:32.770035 containerd[1707]: time="2026-01-17T00:29:32.769991206Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:32.770460 containerd[1707]: time="2026-01-17T00:29:32.770408614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.770979 containerd[1707]: time="2026-01-17T00:29:32.770878925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.800035 systemd[1]: Started cri-containerd-abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13.scope - libcontainer container abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13. Jan 17 00:29:32.849145 containerd[1707]: time="2026-01-17T00:29:32.849098101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-bfd8dc5f6-rbjmv,Uid:4b45b454-ebe6-4d21-bf83-a7855971fc58,Namespace:calico-system,Attempt:1,} returns sandbox id \"abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13\"" Jan 17 00:29:32.889978 containerd[1707]: time="2026-01-17T00:29:32.889735972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:32.889978 containerd[1707]: time="2026-01-17T00:29:32.889798774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:32.890426 containerd[1707]: time="2026-01-17T00:29:32.889835675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.890426 containerd[1707]: time="2026-01-17T00:29:32.890088680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:32.911163 systemd[1]: Started cri-containerd-4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0.scope - libcontainer container 4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0. 
Jan 17 00:29:32.913885 containerd[1707]: time="2026-01-17T00:29:32.912924669Z" level=info msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\"" Jan 17 00:29:33.002532 containerd[1707]: time="2026-01-17T00:29:33.002483289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-2cnwv,Uid:d4d1f5cb-6ccd-4c1d-9961-16d6f6063290,Namespace:kube-system,Attempt:1,} returns sandbox id \"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0\"" Jan 17 00:29:33.022534 containerd[1707]: time="2026-01-17T00:29:33.021721502Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:33.029729 containerd[1707]: time="2026-01-17T00:29:33.029665972Z" level=info msg="CreateContainer within sandbox \"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:29:33.030436 containerd[1707]: time="2026-01-17T00:29:33.030309486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:29:33.030715 containerd[1707]: time="2026-01-17T00:29:33.030604492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:33.031165 kubelet[3212]: E0117 00:29:33.031046 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:33.031165 kubelet[3212]: E0117 00:29:33.031102 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:33.031606 containerd[1707]: time="2026-01-17T00:29:33.031436110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:29:33.033132 kubelet[3212]: E0117 00:29:33.031757 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:33.033132 kubelet[3212]: E0117 00:29:33.031820 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:33.086155 containerd[1707]: 
2026-01-17 00:29:32.993 [INFO][5452] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:32.995 [INFO][5452] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" iface="eth0" netns="/var/run/netns/cni-6b26a586-f2d6-38e9-fd6d-d4537d742082" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:32.995 [INFO][5452] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" iface="eth0" netns="/var/run/netns/cni-6b26a586-f2d6-38e9-fd6d-d4537d742082" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:32.996 [INFO][5452] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" iface="eth0" netns="/var/run/netns/cni-6b26a586-f2d6-38e9-fd6d-d4537d742082" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:32.996 [INFO][5452] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:32.996 [INFO][5452] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.071 [INFO][5475] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.071 [INFO][5475] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.071 [INFO][5475] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.080 [WARNING][5475] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.080 [INFO][5475] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.082 [INFO][5475] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:33.086155 containerd[1707]: 2026-01-17 00:29:33.084 [INFO][5452] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Jan 17 00:29:33.092559 containerd[1707]: time="2026-01-17T00:29:33.091878706Z" level=info msg="TearDown network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" successfully" Jan 17 00:29:33.092559 containerd[1707]: time="2026-01-17T00:29:33.091926807Z" level=info msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" returns successfully" Jan 17 00:29:33.093767 systemd[1]: run-netns-cni\x2d6b26a586\x2df2d6\x2d38e9\x2dfd6d\x2dd4537d742082.mount: Deactivated successfully. Jan 17 00:29:33.128192 containerd[1707]: time="2026-01-17T00:29:33.127992580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-bw7sp,Uid:782804bf-2c9e-4b36-ac94-4d730923b45e,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:29:33.184474 systemd-networkd[1582]: calic500d4a688e: Link UP Jan 17 00:29:33.184720 systemd-networkd[1582]: calic500d4a688e: Gained carrier Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.073 [INFO][5466] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0 calico-apiserver-bdcd7994c- calico-apiserver dc357ba6-2c61-48b4-b7fe-5c77c584c2d0 997 0 2026-01-17 00:28:59 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:bdcd7994c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 calico-apiserver-bdcd7994c-plxvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic500d4a688e [] [] }} ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.074 [INFO][5466] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.125 [INFO][5488] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" HandleID="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.126 [INFO][5488] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" HandleID="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5700), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"calico-apiserver-bdcd7994c-plxvx", "timestamp":"2026-01-17 00:29:33.125148619 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.127 [INFO][5488] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.127 [INFO][5488] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.128 [INFO][5488] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.139 [INFO][5488] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.149 [INFO][5488] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.153 [INFO][5488] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.155 [INFO][5488] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.157 [INFO][5488] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.157 [INFO][5488] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.159 [INFO][5488] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.164 [INFO][5488] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.175 [INFO][5488] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.200/26] block=192.168.61.192/26 handle="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.176 [INFO][5488] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.200/26] handle="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.176 [INFO][5488] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:29:33.204191 containerd[1707]: 2026-01-17 00:29:33.176 [INFO][5488] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.200/26] IPv6=[] ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" HandleID="k8s-pod-network.f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.178 [INFO][5466] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0", GenerateName:"calico-apiserver-bdcd7994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bdcd7994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"calico-apiserver-bdcd7994c-plxvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic500d4a688e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.179 [INFO][5466] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.200/32] ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.179 [INFO][5466] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic500d4a688e ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.185 [INFO][5466] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.186 [INFO][5466] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0", GenerateName:"calico-apiserver-bdcd7994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bdcd7994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e", Pod:"calico-apiserver-bdcd7994c-plxvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic500d4a688e", MAC:"ee:da:c6:a5:db:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:33.205127 containerd[1707]: 2026-01-17 00:29:33.201 [INFO][5466] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e" Namespace="calico-apiserver" Pod="calico-apiserver-bdcd7994c-plxvx" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:33.242205 kubelet[3212]: E0117 00:29:33.242153 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:33.336057 containerd[1707]: time="2026-01-17T00:29:33.335991538Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:33.450090 containerd[1707]: time="2026-01-17T00:29:33.449578773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:33.451175 containerd[1707]: time="2026-01-17T00:29:33.451074205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:33.451175 containerd[1707]: time="2026-01-17T00:29:33.451101806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:33.451577 containerd[1707]: time="2026-01-17T00:29:33.451207408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:33.498308 systemd[1]: Started cri-containerd-f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e.scope - libcontainer container f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e. Jan 17 00:29:33.539183 containerd[1707]: time="2026-01-17T00:29:33.538982390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:29:33.541610 containerd[1707]: time="2026-01-17T00:29:33.539118493Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:33.542272 kubelet[3212]: E0117 00:29:33.542087 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:33.542272 kubelet[3212]: E0117 00:29:33.542179 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:33.542409 kubelet[3212]: E0117 00:29:33.542299 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:33.542409 kubelet[3212]: E0117 00:29:33.542348 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:29:33.552571 containerd[1707]: time="2026-01-17T00:29:33.552538380Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-bdcd7994c-plxvx,Uid:dc357ba6-2c61-48b4-b7fe-5c77c584c2d0,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e\"" Jan 17 00:29:33.556353 containerd[1707]: time="2026-01-17T00:29:33.556225059Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:33.624028 systemd-networkd[1582]: cali1fba24b2fad: Gained IPv6LL Jan 17 00:29:33.978483 containerd[1707]: time="2026-01-17T00:29:33.978257106Z" level=info msg="CreateContainer within sandbox \"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"55dc34b2d95f64bf2895a86f466d144b2e7aca5f68d14d55574999bc10671968\"" Jan 17 00:29:33.985033 containerd[1707]: time="2026-01-17T00:29:33.984632343Z" level=info msg="StartContainer for \"55dc34b2d95f64bf2895a86f466d144b2e7aca5f68d14d55574999bc10671968\"" Jan 17 00:29:34.030188 systemd[1]: Started cri-containerd-55dc34b2d95f64bf2895a86f466d144b2e7aca5f68d14d55574999bc10671968.scope - libcontainer container 55dc34b2d95f64bf2895a86f466d144b2e7aca5f68d14d55574999bc10671968. Jan 17 00:29:34.061922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount147513326.mount: Deactivated successfully. Jan 17 00:29:34.089649 systemd-networkd[1582]: calibd58a46584d: Link UP Jan 17 00:29:34.091002 systemd-networkd[1582]: calibd58a46584d: Gained carrier Jan 17 00:29:34.110347 containerd[1707]: time="2026-01-17T00:29:34.110294737Z" level=info msg="StartContainer for \"55dc34b2d95f64bf2895a86f466d144b2e7aca5f68d14d55574999bc10671968\" returns successfully" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:33.959 [INFO][5547] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0 calico-apiserver-6749bfd78c- calico-apiserver 782804bf-2c9e-4b36-ac94-4d730923b45e 1018 0 2026-01-17 00:28:58 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6749bfd78c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-n-c809bb5d02 calico-apiserver-6749bfd78c-bw7sp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibd58a46584d [] [] }} ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:33.960 [INFO][5547] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:33.999 [INFO][5559] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" HandleID="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.125911 containerd[1707]: 
2026-01-17 00:29:34.000 [INFO][5559] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" HandleID="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-n-c809bb5d02", "pod":"calico-apiserver-6749bfd78c-bw7sp", "timestamp":"2026-01-17 00:29:33.999824568 +0000 UTC"}, Hostname:"ci-4081.3.6-n-c809bb5d02", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.000 [INFO][5559] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.000 [INFO][5559] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.000 [INFO][5559] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-n-c809bb5d02' Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.015 [INFO][5559] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.036 [INFO][5559] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.043 [INFO][5559] ipam/ipam.go 511: Trying affinity for 192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.053 [INFO][5559] ipam/ipam.go 158: Attempting to load block cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.060 [INFO][5559] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.61.192/26 host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.060 [INFO][5559] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.61.192/26 handle="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.064 [INFO][5559] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279 Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.069 [INFO][5559] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.61.192/26 handle="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.082 [INFO][5559] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.61.201/26] block=192.168.61.192/26 handle="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.082 [INFO][5559] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.61.201/26] handle="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" 
host="ci-4081.3.6-n-c809bb5d02" Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.082 [INFO][5559] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:34.125911 containerd[1707]: 2026-01-17 00:29:34.082 [INFO][5559] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.61.201/26] IPv6=[] ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" HandleID="k8s-pod-network.ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.085 [INFO][5547] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"782804bf-2c9e-4b36-ac94-4d730923b45e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"", Pod:"calico-apiserver-6749bfd78c-bw7sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd58a46584d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.085 [INFO][5547] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.61.201/32] ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.085 [INFO][5547] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd58a46584d ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.088 [INFO][5547] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" 
WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.092 [INFO][5547] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"782804bf-2c9e-4b36-ac94-4d730923b45e", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279", Pod:"calico-apiserver-6749bfd78c-bw7sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd58a46584d", MAC:"ba:f5:a7:ec:13:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:34.127129 containerd[1707]: 2026-01-17 00:29:34.123 [INFO][5547] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279" Namespace="calico-apiserver" Pod="calico-apiserver-6749bfd78c-bw7sp" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0" Jan 17 00:29:34.128956 containerd[1707]: time="2026-01-17T00:29:34.128745832Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:34.133581 containerd[1707]: time="2026-01-17T00:29:34.132258507Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:34.133581 containerd[1707]: time="2026-01-17T00:29:34.133221728Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:34.133961 kubelet[3212]: E0117 00:29:34.133553 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:34.133961 kubelet[3212]: E0117 00:29:34.133615 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:34.133961 kubelet[3212]: E0117 00:29:34.133721 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:34.133961 kubelet[3212]: E0117 00:29:34.133772 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:34.200379 containerd[1707]: time="2026-01-17T00:29:34.198088018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:29:34.200379 containerd[1707]: time="2026-01-17T00:29:34.198145020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:29:34.200379 containerd[1707]: time="2026-01-17T00:29:34.198158920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:34.200379 containerd[1707]: time="2026-01-17T00:29:34.198236722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:29:34.230046 systemd[1]: Started cri-containerd-ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279.scope - libcontainer container ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279. 
Jan 17 00:29:34.254198 kubelet[3212]: E0117 00:29:34.253829 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:29:34.254198 kubelet[3212]: E0117 00:29:34.253971 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:34.255184 kubelet[3212]: E0117 00:29:34.255041 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:34.266097 kubelet[3212]: I0117 00:29:34.265837 3212 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-2cnwv" podStartSLOduration=46.26581537 podStartE2EDuration="46.26581537s" podCreationTimestamp="2026-01-17 00:28:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:29:34.264454141 +0000 UTC m=+51.462890112" watchObservedRunningTime="2026-01-17 00:29:34.26581537 +0000 UTC m=+51.464251341" Jan 17 00:29:34.327523 containerd[1707]: time="2026-01-17T00:29:34.327112584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6749bfd78c-bw7sp,Uid:782804bf-2c9e-4b36-ac94-4d730923b45e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279\"" Jan 17 00:29:34.329645 systemd-networkd[1582]: cali60605130bbc: Gained IPv6LL Jan 17 00:29:34.332501 containerd[1707]: time="2026-01-17T00:29:34.332142092Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:34.456091 systemd-networkd[1582]: cali1cae6cf4ca6: Gained IPv6LL Jan 17 00:29:34.577799 containerd[1707]: time="2026-01-17T00:29:34.577735257Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:34.580942 containerd[1707]: time="2026-01-17T00:29:34.580886924Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:34.581064 containerd[1707]: time="2026-01-17T00:29:34.580990226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:34.581253 kubelet[3212]: E0117 00:29:34.581204 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:34.581344 kubelet[3212]: E0117 00:29:34.581270 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:34.581756 kubelet[3212]: E0117 00:29:34.581431 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:34.581756 kubelet[3212]: E0117 00:29:34.581493 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:35.160265 systemd-networkd[1582]: calic500d4a688e: Gained IPv6LL Jan 17 00:29:35.257865 kubelet[3212]: E0117 00:29:35.257684 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:35.257865 kubelet[3212]: E0117 00:29:35.257771 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:35.544199 systemd-networkd[1582]: calibd58a46584d: Gained IPv6LL Jan 17 00:29:36.260380 kubelet[3212]: E0117 00:29:36.260137 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:37.912865 containerd[1707]: time="2026-01-17T00:29:37.912736446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:29:38.162520 containerd[1707]: time="2026-01-17T00:29:38.162456799Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:38.170903 containerd[1707]: time="2026-01-17T00:29:38.170617974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:29:38.170903 containerd[1707]: time="2026-01-17T00:29:38.170660875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:29:38.171089 kubelet[3212]: E0117 00:29:38.170954 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:38.171089 kubelet[3212]: E0117 00:29:38.171030 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:29:38.172941 kubelet[3212]: E0117 00:29:38.171197 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:38.173376 containerd[1707]: time="2026-01-17T00:29:38.173248331Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:29:38.443780 containerd[1707]: time="2026-01-17T00:29:38.443479123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:38.446385 containerd[1707]: time="2026-01-17T00:29:38.446327085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:29:38.446540 containerd[1707]: time="2026-01-17T00:29:38.446346785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:38.446728 kubelet[3212]: E0117 00:29:38.446679 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:38.446825 kubelet[3212]: E0117 00:29:38.446746 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:29:38.446921 kubelet[3212]: E0117 00:29:38.446890 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:38.446991 kubelet[3212]: E0117 00:29:38.446961 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:29:42.923716 containerd[1707]: time="2026-01-17T00:29:42.923659800Z" level=info msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.959 [WARNING][5675] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0", Pod:"coredns-66bc5c9577-2cnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cae6cf4ca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.960 [INFO][5675] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.960 [INFO][5675] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" iface="eth0" netns="" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.960 [INFO][5675] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.960 [INFO][5675] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.984 [INFO][5682] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.985 [INFO][5682] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.985 [INFO][5682] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.991 [WARNING][5682] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.992 [INFO][5682] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.993 [INFO][5682] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:42.995918 containerd[1707]: 2026-01-17 00:29:42.994 [INFO][5675] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:42.996599 containerd[1707]: time="2026-01-17T00:29:42.995996840Z" level=info msg="TearDown network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" successfully" Jan 17 00:29:42.996599 containerd[1707]: time="2026-01-17T00:29:42.996079341Z" level=info msg="StopPodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" returns successfully" Jan 17 00:29:42.997177 containerd[1707]: time="2026-01-17T00:29:42.997142164Z" level=info msg="RemovePodSandbox for \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" Jan 17 00:29:42.997323 containerd[1707]: time="2026-01-17T00:29:42.997181565Z" level=info msg="Forcibly stopping sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\"" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.038 [WARNING][5696] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"d4d1f5cb-6ccd-4c1d-9961-16d6f6063290", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"4ebc1bc9e18ab35dfa20cbe219ed16c3e5f7d6f6e60bf07ac3bdc0245b6977d0", Pod:"coredns-66bc5c9577-2cnwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cae6cf4ca6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.038 [INFO][5696] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.038 [INFO][5696] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" iface="eth0" netns="" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.038 [INFO][5696] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.038 [INFO][5696] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.061 [INFO][5703] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.061 [INFO][5703] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.061 [INFO][5703] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.068 [WARNING][5703] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.068 [INFO][5703] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" HandleID="k8s-pod-network.f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--2cnwv-eth0" Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.069 [INFO][5703] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:43.072770 containerd[1707]: 2026-01-17 00:29:43.071 [INFO][5696] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408" Jan 17 00:29:43.072770 containerd[1707]: time="2026-01-17T00:29:43.072613071Z" level=info msg="TearDown network for sandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" successfully" Jan 17 00:29:43.083694 containerd[1707]: time="2026-01-17T00:29:43.083641906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:29:43.083876 containerd[1707]: time="2026-01-17T00:29:43.083729508Z" level=info msg="RemovePodSandbox \"f5d69d4682d70695e75339453c46f2503294400b2e1b71e895d3aa604633e408\" returns successfully"
Jan 17 00:29:43.084554 containerd[1707]: time="2026-01-17T00:29:43.084511024Z" level=info msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\""
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.120 [WARNING][5717] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.120 [INFO][5717] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.120 [INFO][5717] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" iface="eth0" netns=""
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.120 [INFO][5717] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.120 [INFO][5717] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.142 [INFO][5725] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.142 [INFO][5725] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.142 [INFO][5725] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.149 [WARNING][5725] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.149 [INFO][5725] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.150 [INFO][5725] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.153972 containerd[1707]: 2026-01-17 00:29:43.151 [INFO][5717] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.154770 containerd[1707]: time="2026-01-17T00:29:43.154696519Z" level=info msg="TearDown network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" successfully"
Jan 17 00:29:43.154770 containerd[1707]: time="2026-01-17T00:29:43.154756920Z" level=info msg="StopPodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" returns successfully"
Jan 17 00:29:43.155461 containerd[1707]: time="2026-01-17T00:29:43.155421334Z" level=info msg="RemovePodSandbox for \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\""
Jan 17 00:29:43.155831 containerd[1707]: time="2026-01-17T00:29:43.155618738Z" level=info msg="Forcibly stopping sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\""
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.202 [WARNING][5739] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" WorkloadEndpoint="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.202 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.202 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" iface="eth0" netns=""
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.202 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.202 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.223 [INFO][5746] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.224 [INFO][5746] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.224 [INFO][5746] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.230 [WARNING][5746] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.231 [INFO][5746] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" HandleID="k8s-pod-network.0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200" Workload="ci--4081.3.6--n--c809bb5d02-k8s-whisker--6977ddfdf8--4s6xp-eth0"
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.232 [INFO][5746] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.235404 containerd[1707]: 2026-01-17 00:29:43.234 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200"
Jan 17 00:29:43.235404 containerd[1707]: time="2026-01-17T00:29:43.235368236Z" level=info msg="TearDown network for sandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" successfully"
Jan 17 00:29:43.243951 containerd[1707]: time="2026-01-17T00:29:43.243893718Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:29:43.244129 containerd[1707]: time="2026-01-17T00:29:43.243971020Z" level=info msg="RemovePodSandbox \"0b849b68610d0d0d27e9c6b601ff13fc631539b691ec0101ff592a1e477dc200\" returns successfully"
Jan 17 00:29:43.244703 containerd[1707]: time="2026-01-17T00:29:43.244671235Z" level=info msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\""
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.281 [WARNING][5760] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d77883-a4c9-44f4-bd4d-b065491724ef", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32", Pod:"goldmane-7c778bb748-jvj5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fba24b2fad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.281 [INFO][5760] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.282 [INFO][5760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" iface="eth0" netns=""
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.282 [INFO][5760] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.282 [INFO][5760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.303 [INFO][5768] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.303 [INFO][5768] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.303 [INFO][5768] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.309 [WARNING][5768] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.309 [INFO][5768] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.310 [INFO][5768] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.313465 containerd[1707]: 2026-01-17 00:29:43.312 [INFO][5760] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.314399 containerd[1707]: time="2026-01-17T00:29:43.313518000Z" level=info msg="TearDown network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" successfully"
Jan 17 00:29:43.314399 containerd[1707]: time="2026-01-17T00:29:43.313554501Z" level=info msg="StopPodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" returns successfully"
Jan 17 00:29:43.314399 containerd[1707]: time="2026-01-17T00:29:43.314312117Z" level=info msg="RemovePodSandbox for \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\""
Jan 17 00:29:43.314399 containerd[1707]: time="2026-01-17T00:29:43.314350318Z" level=info msg="Forcibly stopping sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\""
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.347 [WARNING][5782] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"87d77883-a4c9-44f4-bd4d-b065491724ef", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"0399e395dbdfc2117950e039ee70ada1d776e6e4f4299b755023b853644fab32", Pod:"goldmane-7c778bb748-jvj5r", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.61.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1fba24b2fad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.348 [INFO][5782] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.348 [INFO][5782] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" iface="eth0" netns=""
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.348 [INFO][5782] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.348 [INFO][5782] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.372 [INFO][5789] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.372 [INFO][5789] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.372 [INFO][5789] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.380 [WARNING][5789] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.380 [INFO][5789] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" HandleID="k8s-pod-network.24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8" Workload="ci--4081.3.6--n--c809bb5d02-k8s-goldmane--7c778bb748--jvj5r-eth0"
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.382 [INFO][5789] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.385085 containerd[1707]: 2026-01-17 00:29:43.383 [INFO][5782] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8"
Jan 17 00:29:43.386068 containerd[1707]: time="2026-01-17T00:29:43.385553234Z" level=info msg="TearDown network for sandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" successfully"
Jan 17 00:29:43.392864 containerd[1707]: time="2026-01-17T00:29:43.392670486Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:29:43.392864 containerd[1707]: time="2026-01-17T00:29:43.392744487Z" level=info msg="RemovePodSandbox \"24eb40b127bd6f761c085892158b4958f17f60a892277f7d169e7ca7e3d375d8\" returns successfully"
Jan 17 00:29:43.393344 containerd[1707]: time="2026-01-17T00:29:43.393314599Z" level=info msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\""
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.441 [WARNING][5804] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"782804bf-2c9e-4b36-ac94-4d730923b45e", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279", Pod:"calico-apiserver-6749bfd78c-bw7sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd58a46584d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.442 [INFO][5804] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.442 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" iface="eth0" netns=""
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.442 [INFO][5804] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.442 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.462 [INFO][5811] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.462 [INFO][5811] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.462 [INFO][5811] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.468 [WARNING][5811] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.469 [INFO][5811] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.471 [INFO][5811] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.473962 containerd[1707]: 2026-01-17 00:29:43.472 [INFO][5804] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.474641 containerd[1707]: time="2026-01-17T00:29:43.474026218Z" level=info msg="TearDown network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" successfully"
Jan 17 00:29:43.474641 containerd[1707]: time="2026-01-17T00:29:43.474064019Z" level=info msg="StopPodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" returns successfully"
Jan 17 00:29:43.474717 containerd[1707]: time="2026-01-17T00:29:43.474692532Z" level=info msg="RemovePodSandbox for \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\""
Jan 17 00:29:43.474760 containerd[1707]: time="2026-01-17T00:29:43.474732433Z" level=info msg="Forcibly stopping sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\""
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.507 [WARNING][5825] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"782804bf-2c9e-4b36-ac94-4d730923b45e", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"ac00666ec302fdd761df3689ff7453c19bd4854ec2b458b1758bfa1ef40f3279", Pod:"calico-apiserver-6749bfd78c-bw7sp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibd58a46584d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.507 [INFO][5825] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.507 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" iface="eth0" netns=""
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.507 [INFO][5825] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.507 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.531 [INFO][5832] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.531 [INFO][5832] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.531 [INFO][5832] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.537 [WARNING][5832] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.537 [INFO][5832] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" HandleID="k8s-pod-network.d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--bw7sp-eth0"
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.539 [INFO][5832] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.543394 containerd[1707]: 2026-01-17 00:29:43.540 [INFO][5825] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade"
Jan 17 00:29:43.543394 containerd[1707]: time="2026-01-17T00:29:43.542000065Z" level=info msg="TearDown network for sandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" successfully"
Jan 17 00:29:43.550445 containerd[1707]: time="2026-01-17T00:29:43.550401544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:29:43.550558 containerd[1707]: time="2026-01-17T00:29:43.550475546Z" level=info msg="RemovePodSandbox \"d84bf0781f808ee02decc4ad165e60d6f6eecf2c418dab86f7f23c17b9272ade\" returns successfully"
Jan 17 00:29:43.551286 containerd[1707]: time="2026-01-17T00:29:43.551255662Z" level=info msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\""
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.584 [WARNING][5846] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f761b8ec-f7d8-4ff6-9483-963882f3f6d4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4", Pod:"calico-apiserver-6749bfd78c-xh4fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2078bbd6c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.584 [INFO][5846] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.584 [INFO][5846] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" iface="eth0" netns=""
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.584 [INFO][5846] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.584 [INFO][5846] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.608 [INFO][5853] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.608 [INFO][5853] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.608 [INFO][5853] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.614 [WARNING][5853] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.614 [INFO][5853] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.616 [INFO][5853] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.618944 containerd[1707]: 2026-01-17 00:29:43.617 [INFO][5846] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.620496 containerd[1707]: time="2026-01-17T00:29:43.619005005Z" level=info msg="TearDown network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" successfully"
Jan 17 00:29:43.620496 containerd[1707]: time="2026-01-17T00:29:43.619040706Z" level=info msg="StopPodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" returns successfully"
Jan 17 00:29:43.620496 containerd[1707]: time="2026-01-17T00:29:43.619644919Z" level=info msg="RemovePodSandbox for \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\""
Jan 17 00:29:43.620496 containerd[1707]: time="2026-01-17T00:29:43.619680219Z" level=info msg="Forcibly stopping sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\""
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.656 [WARNING][5868] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0", GenerateName:"calico-apiserver-6749bfd78c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f761b8ec-f7d8-4ff6-9483-963882f3f6d4", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 58, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6749bfd78c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"496931dc823d3abce4779cb2c9b6ee04ffba593e64e5d0df0cca7388e2591ac4", Pod:"calico-apiserver-6749bfd78c-xh4fx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2078bbd6c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.656 [INFO][5868] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.656 [INFO][5868] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" iface="eth0" netns=""
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.656 [INFO][5868] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.656 [INFO][5868] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.677 [INFO][5875] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.677 [INFO][5875] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.678 [INFO][5875] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.684 [WARNING][5875] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.684 [INFO][5875] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" HandleID="k8s-pod-network.14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--6749bfd78c--xh4fx-eth0"
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.686 [INFO][5875] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.689120 containerd[1707]: 2026-01-17 00:29:43.687 [INFO][5868] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5"
Jan 17 00:29:43.690880 containerd[1707]: time="2026-01-17T00:29:43.689700410Z" level=info msg="TearDown network for sandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" successfully"
Jan 17 00:29:43.696551 containerd[1707]: time="2026-01-17T00:29:43.696489655Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:29:43.696635 containerd[1707]: time="2026-01-17T00:29:43.696562956Z" level=info msg="RemovePodSandbox \"14afcc168598d8b0dd9819f3b6aed97372a761a6940c0f352ff52d60f0af26e5\" returns successfully"
Jan 17 00:29:43.697171 containerd[1707]: time="2026-01-17T00:29:43.697143769Z" level=info msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\""
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.732 [WARNING][5889] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47118e25-f9cc-45d1-87d8-eb13465b2075", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e", Pod:"csi-node-driver-7kvdv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05defc2f6b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.733 [INFO][5889] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.733 [INFO][5889] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" iface="eth0" netns=""
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.733 [INFO][5889] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.733 [INFO][5889] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.755 [INFO][5896] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.756 [INFO][5896] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.756 [INFO][5896] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.763 [WARNING][5896] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.763 [INFO][5896] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.765 [INFO][5896] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.770607 containerd[1707]: 2026-01-17 00:29:43.768 [INFO][5889] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.772516 containerd[1707]: time="2026-01-17T00:29:43.770680434Z" level=info msg="TearDown network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" successfully"
Jan 17 00:29:43.772516 containerd[1707]: time="2026-01-17T00:29:43.770719335Z" level=info msg="StopPodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" returns successfully"
Jan 17 00:29:43.772516 containerd[1707]: time="2026-01-17T00:29:43.771508852Z" level=info msg="RemovePodSandbox for \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\""
Jan 17 00:29:43.772516 containerd[1707]: time="2026-01-17T00:29:43.771543653Z" level=info msg="Forcibly stopping sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\""
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.842 [WARNING][5911] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47118e25-f9cc-45d1-87d8-eb13465b2075", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"e0a567263dca36d2d72ee5fa344f908b6b9717f40a0f251c315011848f644e0e", Pod:"csi-node-driver-7kvdv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.61.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali05defc2f6b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.842 [INFO][5911] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.842 [INFO][5911] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" iface="eth0" netns=""
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.842 [INFO][5911] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.842 [INFO][5911] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.895 [INFO][5918] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.896 [INFO][5918] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.896 [INFO][5918] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.917 [WARNING][5918] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.917 [INFO][5918] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" HandleID="k8s-pod-network.6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38" Workload="ci--4081.3.6--n--c809bb5d02-k8s-csi--node--driver--7kvdv-eth0"
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.919 [INFO][5918] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:29:43.924184 containerd[1707]: 2026-01-17 00:29:43.921 [INFO][5911] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38"
Jan 17 00:29:43.924184 containerd[1707]: time="2026-01-17T00:29:43.924149602Z" level=info msg="TearDown network for sandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" successfully"
Jan 17 00:29:43.938295 containerd[1707]: time="2026-01-17T00:29:43.938065598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 17 00:29:43.938295 containerd[1707]: time="2026-01-17T00:29:43.938167101Z" level=info msg="RemovePodSandbox \"6d197115e9a98f45c4ce7dbe7695c83565a5b61252c86c95a8ed4ca248364b38\" returns successfully"
Jan 17 00:29:43.939204 containerd[1707]: time="2026-01-17T00:29:43.938804514Z" level=info msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\""
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:43.977 [WARNING][5932] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0", GenerateName:"calico-apiserver-bdcd7994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 59, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bdcd7994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e", Pod:"calico-apiserver-bdcd7994c-plxvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic500d4a688e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:43.977 [INFO][5932] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d"
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:43.977 [INFO][5932] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" iface="eth0" netns=""
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:43.977 [INFO][5932] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d"
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:43.977 [INFO][5932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d"
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.001 [INFO][5939] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0"
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.001 [INFO][5939] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.001 [INFO][5939] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.007 [WARNING][5939] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist.
Ignoring ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.007 [INFO][5939] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.009 [INFO][5939] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.011821 containerd[1707]: 2026-01-17 00:29:44.010 [INFO][5932] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:44.012470 containerd[1707]: time="2026-01-17T00:29:44.011892170Z" level=info msg="TearDown network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" successfully" Jan 17 00:29:44.012470 containerd[1707]: time="2026-01-17T00:29:44.011930671Z" level=info msg="StopPodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" returns successfully" Jan 17 00:29:44.013094 containerd[1707]: time="2026-01-17T00:29:44.012572985Z" level=info msg="RemovePodSandbox for \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\"" Jan 17 00:29:44.013094 containerd[1707]: time="2026-01-17T00:29:44.012769989Z" level=info msg="Forcibly stopping sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\"" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.055 [WARNING][5953] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0", GenerateName:"calico-apiserver-bdcd7994c-", Namespace:"calico-apiserver", SelfLink:"", UID:"dc357ba6-2c61-48b4-b7fe-5c77c584c2d0", ResourceVersion:"1073", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"bdcd7994c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"f862fa5074bb0d87d039ef3d122c9baec7bdee7d264a3d0760c90ba36364df6e", Pod:"calico-apiserver-bdcd7994c-plxvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.61.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic500d4a688e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.055 [INFO][5953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.055 [INFO][5953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" iface="eth0" netns="" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.055 [INFO][5953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.055 [INFO][5953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.084 [INFO][5961] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.084 [INFO][5961] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.084 [INFO][5961] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.091 [WARNING][5961] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.091 [INFO][5961] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" HandleID="k8s-pod-network.41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--apiserver--bdcd7994c--plxvx-eth0" Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.093 [INFO][5961] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.096087 containerd[1707]: 2026-01-17 00:29:44.094 [INFO][5953] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d" Jan 17 00:29:44.096756 containerd[1707]: time="2026-01-17T00:29:44.096152064Z" level=info msg="TearDown network for sandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" successfully" Jan 17 00:29:44.104281 containerd[1707]: time="2026-01-17T00:29:44.104218836Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:29:44.104410 containerd[1707]: time="2026-01-17T00:29:44.104303338Z" level=info msg="RemovePodSandbox \"41bbe0734b529711449cf5fa71c573bb878e8efae3abc50377008de02ca49e1d\" returns successfully" Jan 17 00:29:44.104923 containerd[1707]: time="2026-01-17T00:29:44.104889150Z" level=info msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.139 [WARNING][5975] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0", GenerateName:"calico-kube-controllers-bfd8dc5f6-", Namespace:"calico-system", SelfLink:"", UID:"4b45b454-ebe6-4d21-bf83-a7855971fc58", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bfd8dc5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13", Pod:"calico-kube-controllers-bfd8dc5f6-rbjmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60605130bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.139 [INFO][5975] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.139 [INFO][5975] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" iface="eth0" netns="" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.139 [INFO][5975] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.139 [INFO][5975] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.161 [INFO][5982] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.161 [INFO][5982] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.162 [INFO][5982] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.167 [WARNING][5982] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.167 [INFO][5982] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.169 [INFO][5982] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.172076 containerd[1707]: 2026-01-17 00:29:44.170 [INFO][5975] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.172785 containerd[1707]: time="2026-01-17T00:29:44.172141382Z" level=info msg="TearDown network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" successfully" Jan 17 00:29:44.172785 containerd[1707]: time="2026-01-17T00:29:44.172189983Z" level=info msg="StopPodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" returns successfully" Jan 17 00:29:44.173104 containerd[1707]: time="2026-01-17T00:29:44.173068302Z" level=info msg="RemovePodSandbox for \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" Jan 17 00:29:44.173178 containerd[1707]: time="2026-01-17T00:29:44.173108403Z" level=info msg="Forcibly stopping sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\"" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.213 [WARNING][5996] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0", GenerateName:"calico-kube-controllers-bfd8dc5f6-", Namespace:"calico-system", SelfLink:"", UID:"4b45b454-ebe6-4d21-bf83-a7855971fc58", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 29, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"bfd8dc5f6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"abc62c38f78ea34e825834fad62324475a5564dd45c536dcbb2c31c87cef6d13", Pod:"calico-kube-controllers-bfd8dc5f6-rbjmv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.61.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali60605130bbc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.213 [INFO][5996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.213 [INFO][5996] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" iface="eth0" netns="" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.213 [INFO][5996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.213 [INFO][5996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.238 [INFO][6005] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.238 [INFO][6005] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.238 [INFO][6005] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.244 [WARNING][6005] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.244 [INFO][6005] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" HandleID="k8s-pod-network.67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Workload="ci--4081.3.6--n--c809bb5d02-k8s-calico--kube--controllers--bfd8dc5f6--rbjmv-eth0" Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.246 [INFO][6005] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.249701 containerd[1707]: 2026-01-17 00:29:44.247 [INFO][5996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba" Jan 17 00:29:44.249701 containerd[1707]: time="2026-01-17T00:29:44.249660233Z" level=info msg="TearDown network for sandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" successfully" Jan 17 00:29:44.259717 containerd[1707]: time="2026-01-17T00:29:44.259652446Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:29:44.259928 containerd[1707]: time="2026-01-17T00:29:44.259749448Z" level=info msg="RemovePodSandbox \"67a883d842650a8a9cd23d89e9bf3a8d563dcbb1b6f9fecfc4096e688a9c3bba\" returns successfully" Jan 17 00:29:44.260562 containerd[1707]: time="2026-01-17T00:29:44.260465663Z" level=info msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.303 [WARNING][6019] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a13ebeb2-eb90-475a-98df-04917f3b6561", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573", Pod:"coredns-66bc5c9577-9p5lm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2e1f851abb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.303 [INFO][6019] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.303 [INFO][6019] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" iface="eth0" netns="" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.303 [INFO][6019] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.303 [INFO][6019] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.327 [INFO][6026] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.328 [INFO][6026] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.328 [INFO][6026] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.334 [WARNING][6026] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.334 [INFO][6026] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.336 [INFO][6026] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.340634 containerd[1707]: 2026-01-17 00:29:44.337 [INFO][6019] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.341282 containerd[1707]: time="2026-01-17T00:29:44.340741772Z" level=info msg="TearDown network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" successfully" Jan 17 00:29:44.341282 containerd[1707]: time="2026-01-17T00:29:44.340782173Z" level=info msg="StopPodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" returns successfully" Jan 17 00:29:44.342616 containerd[1707]: time="2026-01-17T00:29:44.342468609Z" level=info msg="RemovePodSandbox for \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" Jan 17 00:29:44.342616 containerd[1707]: time="2026-01-17T00:29:44.342526210Z" level=info msg="Forcibly stopping sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\"" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.385 [WARNING][6040] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"a13ebeb2-eb90-475a-98df-04917f3b6561", ResourceVersion:"985", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 28, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-n-c809bb5d02", ContainerID:"6fce65c45e23dc0e682f6a8383977d1dbdb7eb709acbf0e5235ede2a3c317573", Pod:"coredns-66bc5c9577-9p5lm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.61.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid2e1f851abb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.385 [INFO][6040] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.385 [INFO][6040] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" iface="eth0" netns="" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.385 [INFO][6040] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.385 [INFO][6040] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.408 [INFO][6048] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.409 [INFO][6048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.409 [INFO][6048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.415 [WARNING][6048] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.415 [INFO][6048] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" HandleID="k8s-pod-network.a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Workload="ci--4081.3.6--n--c809bb5d02-k8s-coredns--66bc5c9577--9p5lm-eth0" Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.417 [INFO][6048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:29:44.419956 containerd[1707]: 2026-01-17 00:29:44.418 [INFO][6040] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd" Jan 17 00:29:44.420815 containerd[1707]: time="2026-01-17T00:29:44.420024360Z" level=info msg="TearDown network for sandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" successfully" Jan 17 00:29:44.427823 containerd[1707]: time="2026-01-17T00:29:44.427763325Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 17 00:29:44.427986 containerd[1707]: time="2026-01-17T00:29:44.427860927Z" level=info msg="RemovePodSandbox \"a235e30ea84e1633eb79cb0f99935d609480c383cca766b7d80a0eecfc2e37cd\" returns successfully" Jan 17 00:29:44.914763 containerd[1707]: time="2026-01-17T00:29:44.913766773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:29:45.173560 containerd[1707]: time="2026-01-17T00:29:45.173383801Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:45.176752 containerd[1707]: time="2026-01-17T00:29:45.176692871Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:29:45.176906 containerd[1707]: time="2026-01-17T00:29:45.176736672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:29:45.177123 kubelet[3212]: E0117 00:29:45.177077 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:45.177546 kubelet[3212]: E0117 00:29:45.177136 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:29:45.177546 kubelet[3212]: E0117 00:29:45.177256 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:45.179785 containerd[1707]: time="2026-01-17T00:29:45.179738536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:29:45.420414 containerd[1707]: time="2026-01-17T00:29:45.420346059Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:45.423221 containerd[1707]: time="2026-01-17T00:29:45.423171819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:29:45.423340 containerd[1707]: time="2026-01-17T00:29:45.423281122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:29:45.423559 kubelet[3212]: E0117 00:29:45.423513 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:45.424212 kubelet[3212]: E0117 00:29:45.423575 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:29:45.424212 kubelet[3212]: E0117 00:29:45.423693 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:45.424212 kubelet[3212]: E0117 00:29:45.423759 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:29:45.913964 containerd[1707]: time="2026-01-17T00:29:45.913840567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:46.162068 containerd[1707]: time="2026-01-17T00:29:46.161755645Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:46.167373 containerd[1707]: time="2026-01-17T00:29:46.164915613Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:46.167373 containerd[1707]: time="2026-01-17T00:29:46.165044315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:46.167566 kubelet[3212]: E0117 00:29:46.165405 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:46.167566 kubelet[3212]: E0117 00:29:46.165469 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:46.167566 kubelet[3212]: E0117 00:29:46.165713 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:46.167566 kubelet[3212]: E0117 00:29:46.165774 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:46.168155 containerd[1707]: time="2026-01-17T00:29:46.168123581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:46.409106 containerd[1707]: time="2026-01-17T00:29:46.409045411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:46.411770 containerd[1707]: time="2026-01-17T00:29:46.411714668Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:46.411986 containerd[1707]: time="2026-01-17T00:29:46.411759168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:46.412091 kubelet[3212]: E0117 00:29:46.412022 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:46.412421 kubelet[3212]: E0117 00:29:46.412101 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:46.412421 kubelet[3212]: E0117 00:29:46.412212 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:46.412421 kubelet[3212]: E0117 00:29:46.412275 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:48.915900 containerd[1707]: time="2026-01-17T00:29:48.915235704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:29:49.156456 containerd[1707]: time="2026-01-17T00:29:49.156390519Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:49.160334 containerd[1707]: time="2026-01-17T00:29:49.160256203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:29:49.160334 containerd[1707]: time="2026-01-17T00:29:49.160303604Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:29:49.160613 kubelet[3212]: E0117 00:29:49.160559 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:49.161123 kubelet[3212]: E0117 00:29:49.160625 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:29:49.161123 kubelet[3212]: E0117 00:29:49.160903 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:49.161123 kubelet[3212]: E0117 00:29:49.160960 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" 
podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:29:49.161965 containerd[1707]: time="2026-01-17T00:29:49.161931839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:29:49.405955 containerd[1707]: time="2026-01-17T00:29:49.405887816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:49.408912 containerd[1707]: time="2026-01-17T00:29:49.408835879Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:29:49.409159 containerd[1707]: time="2026-01-17T00:29:49.408881680Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:49.409267 kubelet[3212]: E0117 00:29:49.409212 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:49.409336 kubelet[3212]: E0117 00:29:49.409283 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:29:49.409422 kubelet[3212]: E0117 00:29:49.409395 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:49.409469 kubelet[3212]: E0117 00:29:49.409446 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:29:50.914959 containerd[1707]: time="2026-01-17T00:29:50.914887453Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:29:50.916298 kubelet[3212]: E0117 00:29:50.915481 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:29:51.167225 containerd[1707]: time="2026-01-17T00:29:51.167046507Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:29:51.171612 containerd[1707]: time="2026-01-17T00:29:51.171553104Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:29:51.171836 containerd[1707]: time="2026-01-17T00:29:51.171593505Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:29:51.171967 kubelet[3212]: E0117 00:29:51.171914 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:51.172036 kubelet[3212]: E0117 00:29:51.171985 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:29:51.172129 kubelet[3212]: E0117 00:29:51.172100 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:29:51.172189 kubelet[3212]: E0117 00:29:51.172151 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:29:58.915877 kubelet[3212]: E0117 00:29:58.915435 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:29:59.915553 kubelet[3212]: E0117 00:29:59.915473 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:29:59.915819 kubelet[3212]: E0117 00:29:59.915556 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:30:00.915060 kubelet[3212]: E0117 00:30:00.913946 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:30:03.912391 kubelet[3212]: E0117 00:30:03.912301 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:30:05.915344 kubelet[3212]: E0117 00:30:05.914568 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:30:05.917179 containerd[1707]: time="2026-01-17T00:30:05.916733653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:30:06.175181 containerd[1707]: time="2026-01-17T00:30:06.174834328Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:06.177958 containerd[1707]: time="2026-01-17T00:30:06.177742687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:30:06.177958 containerd[1707]: time="2026-01-17T00:30:06.177891690Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:30:06.179081 kubelet[3212]: E0117 00:30:06.178363 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:30:06.179081 kubelet[3212]: E0117 00:30:06.178435 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:30:06.179081 kubelet[3212]: E0117 00:30:06.178544 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:06.181140 containerd[1707]: time="2026-01-17T00:30:06.181108454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:30:06.431409 containerd[1707]: time="2026-01-17T00:30:06.431063666Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:06.436559 containerd[1707]: time="2026-01-17T00:30:06.435473954Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:30:06.436559 containerd[1707]: time="2026-01-17T00:30:06.435614057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:30:06.436792 kubelet[3212]: E0117 00:30:06.435832 3212 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:30:06.436792 kubelet[3212]: E0117 00:30:06.435955 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:30:06.436792 kubelet[3212]: E0117 00:30:06.436053 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:06.437031 kubelet[3212]: E0117 00:30:06.436105 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:30:12.916366 containerd[1707]: time="2026-01-17T00:30:12.916153335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:30:13.165588 containerd[1707]: time="2026-01-17T00:30:13.165309583Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:13.168242 containerd[1707]: time="2026-01-17T00:30:13.168078743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:30:13.168876 containerd[1707]: time="2026-01-17T00:30:13.168479352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:13.168991 kubelet[3212]: E0117 00:30:13.168718 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:30:13.168991 kubelet[3212]: E0117 00:30:13.168776 3212 kuberuntime_image.go:43] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:30:13.168991 kubelet[3212]: E0117 00:30:13.168884 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:13.168991 kubelet[3212]: E0117 00:30:13.168929 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:30:13.916870 containerd[1707]: time="2026-01-17T00:30:13.916798514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:30:14.170042 containerd[1707]: time="2026-01-17T00:30:14.169456238Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:14.172261 containerd[1707]: time="2026-01-17T00:30:14.172036795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:30:14.172261 containerd[1707]: time="2026-01-17T00:30:14.172088396Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:14.172681 kubelet[3212]: E0117 00:30:14.172453 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:14.172681 kubelet[3212]: E0117 00:30:14.172516 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:14.173798 kubelet[3212]: E0117 00:30:14.173751 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" 
logger="UnhandledError" Jan 17 00:30:14.173908 kubelet[3212]: E0117 00:30:14.173829 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:30:14.939322 containerd[1707]: time="2026-01-17T00:30:14.939013065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:30:15.200462 containerd[1707]: time="2026-01-17T00:30:15.200284077Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:15.203589 containerd[1707]: time="2026-01-17T00:30:15.203520048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:30:15.203735 containerd[1707]: time="2026-01-17T00:30:15.203553649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:15.203972 kubelet[3212]: E0117 00:30:15.203918 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:15.205146 kubelet[3212]: E0117 00:30:15.203992 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:15.205146 kubelet[3212]: E0117 00:30:15.204263 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:15.205146 kubelet[3212]: E0117 00:30:15.204316 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:30:15.205813 containerd[1707]: time="2026-01-17T00:30:15.205615694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:30:15.466775 containerd[1707]: 
time="2026-01-17T00:30:15.466607701Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:15.470776 containerd[1707]: time="2026-01-17T00:30:15.470700890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:30:15.470950 containerd[1707]: time="2026-01-17T00:30:15.470831693Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:30:15.472866 kubelet[3212]: E0117 00:30:15.471123 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:30:15.472866 kubelet[3212]: E0117 00:30:15.471182 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:30:15.472866 kubelet[3212]: E0117 00:30:15.471377 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:15.473826 containerd[1707]: time="2026-01-17T00:30:15.473786158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:30:15.718322 containerd[1707]: time="2026-01-17T00:30:15.718141500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:15.721717 containerd[1707]: time="2026-01-17T00:30:15.721648577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:30:15.721906 containerd[1707]: time="2026-01-17T00:30:15.721784380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:30:15.722471 kubelet[3212]: E0117 00:30:15.722135 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:30:15.722471 kubelet[3212]: E0117 00:30:15.722202 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:30:15.722471 kubelet[3212]: E0117 00:30:15.722308 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:15.722699 kubelet[3212]: E0117 00:30:15.722362 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:30:17.917390 kubelet[3212]: E0117 00:30:17.917327 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:30:18.916482 containerd[1707]: time="2026-01-17T00:30:18.916429231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:30:19.166242 containerd[1707]: time="2026-01-17T00:30:19.166177795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:19.170890 containerd[1707]: time="2026-01-17T00:30:19.170717194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:30:19.171026 
containerd[1707]: time="2026-01-17T00:30:19.170896198Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:30:19.171143 kubelet[3212]: E0117 00:30:19.171090 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:30:19.171569 kubelet[3212]: E0117 00:30:19.171163 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:30:19.171569 kubelet[3212]: E0117 00:30:19.171284 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:19.171569 kubelet[3212]: E0117 00:30:19.171329 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:30:19.915468 containerd[1707]: time="2026-01-17T00:30:19.915317503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:30:20.179558 containerd[1707]: time="2026-01-17T00:30:20.179131082Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:20.184193 containerd[1707]: time="2026-01-17T00:30:20.183988188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:30:20.184193 containerd[1707]: time="2026-01-17T00:30:20.184130591Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:20.186644 kubelet[3212]: E0117 00:30:20.184601 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:20.186644 kubelet[3212]: E0117 
00:30:20.184663 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:20.186644 kubelet[3212]: E0117 00:30:20.184764 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:20.186644 kubelet[3212]: E0117 00:30:20.184806 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:30:23.820042 update_engine[1686]: I20260117 00:30:23.819970 1686 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 17 00:30:23.820042 update_engine[1686]: I20260117 00:30:23.820034 1686 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 17 00:30:23.820723 update_engine[1686]: I20260117 00:30:23.820266 1686 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 17 00:30:23.821393 update_engine[1686]: I20260117 00:30:23.821359 1686 omaha_request_params.cc:62] Current group set to lts Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821514 1686 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821537 1686 update_attempter.cc:643] Scheduling an action processor start. 
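[Editor's note] The update_engine sequence above is an Omaha update check being posted to the literal host name "disabled", which is consistent with Flatcar's convention for switching automatic updates off (SERVER=disabled in /etc/flatcar/update.conf); the "Could not resolve host: disabled" failures that follow are therefore expected behavior, not a DNS outage. As the later entries show, libcurl_http_fetcher retries the transfer on a roughly 10-second timer, gives up after retry 3, posts an Omaha error event, and reschedules the next check about 42 minutes out. A hypothetical sketch of that retry cadence, with the constants read off the log rather than taken from update_engine's source:

    import time

    MAX_RETRIES = 3        # the log stops after "No HTTP response, retry 3"
    RETRY_INTERVAL_S = 10  # the retries above land roughly 10 s apart

    def fetch_with_retries(fetch) -> bool:
        """Mimic the fixed-interval retry loop visible in the log."""
        for attempt in range(1, MAX_RETRIES + 2):  # initial try + 3 retries
            if fetch():
                return True
            if attempt <= MAX_RETRIES:
                print(f"No HTTP response, retry {attempt}")
                time.sleep(RETRY_INTERVAL_S)
        return False  # the caller then posts the Omaha error event

    if __name__ == "__main__":
        # The host "disabled" never resolves, so every attempt fails.
        print(fetch_with_retries(lambda: False))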
Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821558 1686 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821594 1686 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821672 1686 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:30:23.821689 update_engine[1686]: I20260117 00:30:23.821683 1686 omaha_request_action.cc:272] Request: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.821689 update_engine[1686]: Jan 17 00:30:23.822275 update_engine[1686]: I20260117 00:30:23.821694 1686 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:30:23.825378 update_engine[1686]: I20260117 00:30:23.824775 1686 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:30:23.825378 update_engine[1686]: I20260117 00:30:23.825327 1686 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:30:23.826109 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 17 00:30:23.984839 update_engine[1686]: E20260117 00:30:23.984766 1686 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:30:23.985028 update_engine[1686]: I20260117 00:30:23.984908 1686 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 17 00:30:25.246456 systemd[1]: run-containerd-runc-k8s.io-8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325-runc.JQs8Xo.mount: Deactivated successfully. 
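[Editor's note] Every calico pull above fails the same way: containerd asks ghcr.io to resolve the tag v3.30.4, the registry answers 404, containerd surfaces that as "failed to resolve reference ...: not found", and kubelet turns it into ErrImagePull and then ImagePullBackOff. The same check can be reproduced off-node with the standard OCI distribution API; a minimal sketch, assuming anonymous pull access to ghcr.io, with the repository and tag names taken from the log:

    import json
    import urllib.error
    import urllib.request

    REGISTRY = "ghcr.io"

    def tag_exists(repository: str, tag: str) -> bool:
        """HEAD the manifest for repository:tag; a 404 is the NotFound above."""
        # 1. Fetch an anonymous bearer token scoped to pull from the repository.
        token_url = (f"https://{REGISTRY}/token"
                     f"?service={REGISTRY}&scope=repository:{repository}:pull")
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # 2. Resolve the tag the same way a container runtime does.
        req = urllib.request.Request(
            f"https://{REGISTRY}/v2/{repository}/manifests/{tag}", method="HEAD")
        req.add_header("Authorization", f"Bearer {token}")
        req.add_header("Accept",
                       "application/vnd.oci.image.index.v1+json, "
                       "application/vnd.oci.image.manifest.v1+json, "
                       "application/vnd.docker.distribution.manifest.list.v2+json, "
                       "application/vnd.docker.distribution.manifest.v2+json")
        try:
            with urllib.request.urlopen(req):
                return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        for repo in ("flatcar/calico/apiserver", "flatcar/calico/whisker"):
            print(repo, tag_exists(repo, "v3.30.4"))  # expect False, per the log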
Jan 17 00:30:27.914446 kubelet[3212]: E0117 00:30:27.914377 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:30:27.917095 kubelet[3212]: E0117 00:30:27.917034 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:30:27.917305 kubelet[3212]: E0117 00:30:27.917176 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:30:28.903978 systemd[1]: Started sshd@7-10.200.8.17:22-10.200.16.10:33214.service - OpenSSH per-connection server daemon (10.200.16.10:33214). Jan 17 00:30:29.557870 sshd[6129]: Accepted publickey for core from 10.200.16.10 port 33214 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:29.558856 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:29.566426 systemd-logind[1685]: New session 10 of user core. Jan 17 00:30:29.573084 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 00:30:29.914997 kubelet[3212]: E0117 00:30:29.914228 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:30:30.186668 sshd[6129]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:30.191964 systemd[1]: sshd@7-10.200.8.17:22-10.200.16.10:33214.service: Deactivated successfully. Jan 17 00:30:30.199230 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:30:30.203385 systemd-logind[1685]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:30:30.204940 systemd-logind[1685]: Removed session 10. Jan 17 00:30:30.920079 kubelet[3212]: E0117 00:30:30.917770 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:30:30.920759 kubelet[3212]: E0117 00:30:30.920666 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:30:33.774380 update_engine[1686]: I20260117 00:30:33.774291 1686 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:30:33.774949 update_engine[1686]: I20260117 00:30:33.774610 1686 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:30:33.775014 update_engine[1686]: I20260117 00:30:33.774940 1686 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:30:33.807301 update_engine[1686]: E20260117 00:30:33.807222 1686 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:30:33.807455 update_engine[1686]: I20260117 00:30:33.807332 1686 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 17 00:30:33.913592 kubelet[3212]: E0117 00:30:33.913141 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:30:35.308501 systemd[1]: Started sshd@8-10.200.8.17:22-10.200.16.10:52798.service - OpenSSH per-connection server daemon (10.200.16.10:52798). Jan 17 00:30:35.967394 sshd[6144]: Accepted publickey for core from 10.200.16.10 port 52798 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:35.969232 sshd[6144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:35.973969 systemd-logind[1685]: New session 11 of user core. Jan 17 00:30:35.978082 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:30:36.492287 sshd[6144]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:36.500644 systemd[1]: sshd@8-10.200.8.17:22-10.200.16.10:52798.service: Deactivated successfully. Jan 17 00:30:36.505350 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:30:36.506525 systemd-logind[1685]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:30:36.508190 systemd-logind[1685]: Removed session 11. Jan 17 00:30:39.912753 kubelet[3212]: E0117 00:30:39.912659 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:30:40.915360 kubelet[3212]: E0117 00:30:40.914872 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:30:41.610250 systemd[1]: Started sshd@9-10.200.8.17:22-10.200.16.10:60204.service - OpenSSH per-connection server daemon (10.200.16.10:60204). 
Jan 17 00:30:41.916432 kubelet[3212]: E0117 00:30:41.915890 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:30:42.246998 sshd[6158]: Accepted publickey for core from 10.200.16.10 port 60204 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:42.249491 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:42.257679 systemd-logind[1685]: New session 12 of user core. Jan 17 00:30:42.263307 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:30:42.828973 sshd[6158]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:42.836777 systemd[1]: sshd@9-10.200.8.17:22-10.200.16.10:60204.service: Deactivated successfully. Jan 17 00:30:42.843607 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:30:42.848211 systemd-logind[1685]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:30:42.850496 systemd-logind[1685]: Removed session 12. 
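[Editor's note] Interleaved with the pull failures, sshd and systemd-logind record a series of short sessions (10, 11, 12 so far) from 10.200.16.10: publickey accept, pam_unix session open, a session-N.scope unit, then teardown. For triage it can help to reduce those paired lines to durations; a small parsing sketch (a hypothetical helper, written against the exact pam_unix wording above; the year is supplied separately because these journal timestamps omit it):

    import re
    from datetime import datetime

    LINE = re.compile(
        r"^(?P<ts>\w{3} \d+ [\d:.]+) .*sshd\[(?P<pid>\d+)\]: "
        r"pam_unix\(sshd:session\): session (?P<event>opened|closed)")

    def session_durations(lines, year=2026):
        """Pair 'session opened'/'session closed' lines by sshd PID."""
        opened = {}
        for line in lines:
            m = LINE.match(line)
            if not m:
                continue
            ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
            if m["event"] == "opened":
                opened[m["pid"]] = ts
            elif m["pid"] in opened:
                yield m["pid"], ts - opened.pop(m["pid"])

    demo = [
        "Jan 17 00:30:29.558856 sshd[6129]: pam_unix(sshd:session): "
        "session opened for user core(uid=500) by core(uid=0)",
        "Jan 17 00:30:30.186668 sshd[6129]: pam_unix(sshd:session): "
        "session closed for user core",
    ]
    for pid, duration in session_durations(demo):
        print(pid, duration)  # 6129 0:00:00.627812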
Jan 17 00:30:42.914809 kubelet[3212]: E0117 00:30:42.914133 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:30:42.915069 kubelet[3212]: E0117 00:30:42.914959 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:30:43.775382 update_engine[1686]: I20260117 00:30:43.771929 1686 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:30:43.775382 update_engine[1686]: I20260117 00:30:43.772261 1686 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:30:43.775382 update_engine[1686]: I20260117 00:30:43.772557 1686 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 17 00:30:43.788328 update_engine[1686]: E20260117 00:30:43.788170 1686 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:30:43.788328 update_engine[1686]: I20260117 00:30:43.788290 1686 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 17 00:30:43.913996 kubelet[3212]: E0117 00:30:43.913920 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:30:46.916247 kubelet[3212]: E0117 00:30:46.915041 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:30:47.944161 systemd[1]: Started sshd@10-10.200.8.17:22-10.200.16.10:60220.service - OpenSSH per-connection server daemon (10.200.16.10:60220). Jan 17 00:30:48.586631 sshd[6179]: Accepted publickey for core from 10.200.16.10 port 60220 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:48.591356 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:48.602668 systemd-logind[1685]: New session 13 of user core. Jan 17 00:30:48.605056 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:30:48.853349 waagent[1916]: 2026-01-17T00:30:48.852376Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 2] Jan 17 00:30:48.861974 waagent[1916]: 2026-01-17T00:30:48.861512Z INFO ExtHandler Jan 17 00:30:48.861974 waagent[1916]: 2026-01-17T00:30:48.861696Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: a4f5e151-bb0b-4c32-8b46-d2c912c2f4eb eTag: 5468858411275541734 source: Fabric] Jan 17 00:30:48.863367 waagent[1916]: 2026-01-17T00:30:48.862620Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 17 00:30:48.864089 waagent[1916]: 2026-01-17T00:30:48.864030Z INFO ExtHandler Jan 17 00:30:48.864334 waagent[1916]: 2026-01-17T00:30:48.864272Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 2] Jan 17 00:30:48.922721 waagent[1916]: 2026-01-17T00:30:48.922651Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 00:30:49.028872 waagent[1916]: 2026-01-17T00:30:49.026509Z INFO ExtHandler Downloaded certificate {'thumbprint': '5FCE0F32B5674435F7508D2DC907DC1B6CC3CBBA', 'hasPrivateKey': True} Jan 17 00:30:49.029896 waagent[1916]: 2026-01-17T00:30:49.029522Z INFO ExtHandler Fetch goal state completed Jan 17 00:30:49.030350 waagent[1916]: 2026-01-17T00:30:49.030279Z INFO ExtHandler ExtHandler Jan 17 00:30:49.030542 waagent[1916]: 2026-01-17T00:30:49.030502Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_2 channel: WireServer source: Fabric activity: 8ac2aafe-f897-4799-b617-996386be983b correlation 8b0516a6-c703-4d14-bdb3-677b645002ff created: 2026-01-17T00:30:38.471436Z] Jan 17 00:30:49.031987 waagent[1916]: 2026-01-17T00:30:49.031102Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 00:30:49.031987 waagent[1916]: 2026-01-17T00:30:49.031787Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_2 1 ms] Jan 17 00:30:49.197301 sshd[6179]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:49.203495 systemd[1]: sshd@10-10.200.8.17:22-10.200.16.10:60220.service: Deactivated successfully. Jan 17 00:30:49.207029 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:30:49.210891 systemd-logind[1685]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:30:49.213447 systemd-logind[1685]: Removed session 13. Jan 17 00:30:49.320628 systemd[1]: Started sshd@11-10.200.8.17:22-10.200.16.10:60236.service - OpenSSH per-connection server daemon (10.200.16.10:60236). Jan 17 00:30:49.972495 sshd[6200]: Accepted publickey for core from 10.200.16.10 port 60236 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:49.976138 sshd[6200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:49.982692 systemd-logind[1685]: New session 14 of user core. Jan 17 00:30:49.993040 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:30:50.590120 sshd[6200]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:50.597317 systemd-logind[1685]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:30:50.600435 systemd[1]: sshd@11-10.200.8.17:22-10.200.16.10:60236.service: Deactivated successfully. Jan 17 00:30:50.603665 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:30:50.608194 systemd-logind[1685]: Removed session 14. Jan 17 00:30:50.711180 systemd[1]: Started sshd@12-10.200.8.17:22-10.200.16.10:34416.service - OpenSSH per-connection server daemon (10.200.16.10:34416). 
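[Editor's note] The waagent entries above are the Azure guest agent refreshing its goal state from the WireServer: a new incarnation (2) is detected, vmSettings arrive via the HostGAPlugin, a certificate is downloaded, and ProcessExtensionsGoalState completes with no extension handlers to run. The underlying fetch is a plain HTTP call that only answers from inside an Azure VM; a sketch against the documented WireServer endpoint (fixed address 168.63.129.16, API version 2012-11-30 — the element name below assumes the standard GoalState XML):

    import urllib.request
    import xml.etree.ElementTree as ET

    WIRESERVER = "http://168.63.129.16"

    def fetch_incarnation() -> str:
        """Fetch the goal state and return its incarnation number."""
        req = urllib.request.Request(f"{WIRESERVER}/machine?comp=goalstate")
        req.add_header("x-ms-version", "2012-11-30")
        with urllib.request.urlopen(req, timeout=10) as resp:
            goal_state = ET.fromstring(resp.read())
        # waagent's "[incarnation 2]" above corresponds to this element.
        return goal_state.findtext("Incarnation")

    if __name__ == "__main__":
        print("goal state incarnation:", fetch_incarnation())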
Jan 17 00:30:50.917488 kubelet[3212]: E0117 00:30:50.916686 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:30:51.368033 sshd[6211]: Accepted publickey for core from 10.200.16.10 port 34416 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:51.371657 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:51.378322 systemd-logind[1685]: New session 15 of user core. Jan 17 00:30:51.385041 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:30:51.908953 sshd[6211]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:51.914307 systemd[1]: sshd@12-10.200.8.17:22-10.200.16.10:34416.service: Deactivated successfully. Jan 17 00:30:51.918342 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:30:51.921622 systemd-logind[1685]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:30:51.923584 systemd-logind[1685]: Removed session 15. Jan 17 00:30:53.772995 update_engine[1686]: I20260117 00:30:53.772898 1686 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:30:53.773510 update_engine[1686]: I20260117 00:30:53.773247 1686 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:30:53.773596 update_engine[1686]: I20260117 00:30:53.773561 1686 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:30:53.806232 update_engine[1686]: E20260117 00:30:53.806160 1686 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:30:53.806426 update_engine[1686]: I20260117 00:30:53.806267 1686 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:30:53.806426 update_engine[1686]: I20260117 00:30:53.806279 1686 omaha_request_action.cc:617] Omaha request response: Jan 17 00:30:53.806426 update_engine[1686]: E20260117 00:30:53.806391 1686 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 17 00:30:53.806426 update_engine[1686]: I20260117 00:30:53.806422 1686 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 17 00:30:53.806585 update_engine[1686]: I20260117 00:30:53.806430 1686 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:30:53.806585 update_engine[1686]: I20260117 00:30:53.806437 1686 update_attempter.cc:306] Processing Done. Jan 17 00:30:53.806585 update_engine[1686]: E20260117 00:30:53.806459 1686 update_attempter.cc:619] Update failed. 
Jan 17 00:30:53.806585 update_engine[1686]: I20260117 00:30:53.806468 1686 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 17 00:30:53.806585 update_engine[1686]: I20260117 00:30:53.806476 1686 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 17 00:30:53.806585 update_engine[1686]: I20260117 00:30:53.806485 1686 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 17 00:30:53.807247 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 17 00:30:53.807706 update_engine[1686]: I20260117 00:30:53.807268 1686 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 17 00:30:53.807706 update_engine[1686]: I20260117 00:30:53.807323 1686 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 17 00:30:53.807706 update_engine[1686]: I20260117 00:30:53.807333 1686 omaha_request_action.cc:272] Request: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: Jan 17 00:30:53.807706 update_engine[1686]: I20260117 00:30:53.807343 1686 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 17 00:30:53.807706 update_engine[1686]: I20260117 00:30:53.807569 1686 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 17 00:30:53.808133 update_engine[1686]: I20260117 00:30:53.807825 1686 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 17 00:30:53.823871 update_engine[1686]: E20260117 00:30:53.822315 1686 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822402 1686 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822412 1686 omaha_request_action.cc:617] Omaha request response: Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822423 1686 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822430 1686 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822436 1686 update_attempter.cc:306] Processing Done. Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822446 1686 update_attempter.cc:310] Error event sent. 
Jan 17 00:30:53.823871 update_engine[1686]: I20260117 00:30:53.822458 1686 update_check_scheduler.cc:74] Next update check in 41m37s Jan 17 00:30:53.824252 locksmithd[1723]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 17 00:30:53.913469 containerd[1707]: time="2026-01-17T00:30:53.913057158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:30:54.175545 containerd[1707]: time="2026-01-17T00:30:54.175260584Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:54.177923 containerd[1707]: time="2026-01-17T00:30:54.177775940Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:30:54.178227 containerd[1707]: time="2026-01-17T00:30:54.177855041Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:54.180593 kubelet[3212]: E0117 00:30:54.178427 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:30:54.180593 kubelet[3212]: E0117 00:30:54.178488 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:30:54.180593 kubelet[3212]: E0117 00:30:54.178587 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-jvj5r_calico-system(87d77883-a4c9-44f4-bd4d-b065491724ef): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:54.180593 kubelet[3212]: E0117 00:30:54.178632 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:30:55.912982 containerd[1707]: time="2026-01-17T00:30:55.912934389Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:30:56.171824 containerd[1707]: time="2026-01-17T00:30:56.171493733Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:56.175023 containerd[1707]: time="2026-01-17T00:30:56.174897809Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:30:56.175023 containerd[1707]: time="2026-01-17T00:30:56.174947210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:30:56.176102 kubelet[3212]: E0117 00:30:56.175349 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:30:56.176102 kubelet[3212]: E0117 00:30:56.175399 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:30:56.176102 kubelet[3212]: E0117 00:30:56.175470 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:56.178344 containerd[1707]: time="2026-01-17T00:30:56.178051279Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:30:56.431240 containerd[1707]: time="2026-01-17T00:30:56.431054500Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:56.434513 containerd[1707]: time="2026-01-17T00:30:56.434375974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:30:56.434513 containerd[1707]: time="2026-01-17T00:30:56.434422675Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:30:56.434735 kubelet[3212]: E0117 00:30:56.434648 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:30:56.434735 kubelet[3212]: E0117 00:30:56.434705 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:30:56.434974 kubelet[3212]: E0117 00:30:56.434801 3212 
kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-7kvdv_calico-system(47118e25-f9cc-45d1-87d8-eb13465b2075): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:56.434974 kubelet[3212]: E0117 00:30:56.434877 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:30:56.919672 containerd[1707]: time="2026-01-17T00:30:56.917777313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:30:56.921215 kubelet[3212]: E0117 00:30:56.917169 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:30:57.028958 systemd[1]: Started sshd@13-10.200.8.17:22-10.200.16.10:34432.service - OpenSSH per-connection server daemon (10.200.16.10:34432). 
Jan 17 00:30:57.164545 containerd[1707]: time="2026-01-17T00:30:57.164472394Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:57.167641 containerd[1707]: time="2026-01-17T00:30:57.167432560Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:30:57.167641 containerd[1707]: time="2026-01-17T00:30:57.167570963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:30:57.168040 kubelet[3212]: E0117 00:30:57.167988 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:30:57.168149 kubelet[3212]: E0117 00:30:57.168059 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:30:57.168797 kubelet[3212]: E0117 00:30:57.168230 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:57.171050 containerd[1707]: time="2026-01-17T00:30:57.170292623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:30:57.422430 containerd[1707]: time="2026-01-17T00:30:57.421688408Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:57.426061 containerd[1707]: time="2026-01-17T00:30:57.425983504Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:30:57.426248 containerd[1707]: time="2026-01-17T00:30:57.426123407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:30:57.426504 kubelet[3212]: E0117 00:30:57.426444 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:57.429090 kubelet[3212]: E0117 00:30:57.426520 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:30:57.429090 kubelet[3212]: E0117 00:30:57.426819 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-bdcd7994c-plxvx_calico-apiserver(dc357ba6-2c61-48b4-b7fe-5c77c584c2d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:57.429090 kubelet[3212]: E0117 00:30:57.426917 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:30:57.429283 containerd[1707]: time="2026-01-17T00:30:57.428544861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:30:57.670304 containerd[1707]: time="2026-01-17T00:30:57.670052526Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:30:57.673420 containerd[1707]: time="2026-01-17T00:30:57.672988991Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:30:57.673420 containerd[1707]: time="2026-01-17T00:30:57.673108394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:30:57.674642 kubelet[3212]: E0117 00:30:57.674448 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:30:57.674642 kubelet[3212]: E0117 00:30:57.674531 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:30:57.675671 kubelet[3212]: E0117 00:30:57.675628 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5dbd59f56d-n649m_calico-system(edba1a23-88e2-404b-a56f-6999060e2565): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:30:57.675787 kubelet[3212]: E0117 00:30:57.675725 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:30:57.678011 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 34432 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:30:57.681520 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:30:57.688654 systemd-logind[1685]: New session 16 of user core. Jan 17 00:30:57.697328 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:30:58.203825 sshd[6250]: pam_unix(sshd:session): session closed for user core Jan 17 00:30:58.210252 systemd-logind[1685]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:30:58.211256 systemd[1]: sshd@13-10.200.8.17:22-10.200.16.10:34432.service: Deactivated successfully. Jan 17 00:30:58.217385 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:30:58.221454 systemd-logind[1685]: Removed session 16. 
Jan 17 00:31:01.915355 containerd[1707]: time="2026-01-17T00:31:01.915020134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:02.171983 containerd[1707]: time="2026-01-17T00:31:02.171308426Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:02.177620 containerd[1707]: time="2026-01-17T00:31:02.177413359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:02.177620 containerd[1707]: time="2026-01-17T00:31:02.177552062Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:02.178156 kubelet[3212]: E0117 00:31:02.178084 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:02.178690 kubelet[3212]: E0117 00:31:02.178170 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:02.179397 kubelet[3212]: E0117 00:31:02.179252 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-bw7sp_calico-apiserver(782804bf-2c9e-4b36-ac94-4d730923b45e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:02.179565 kubelet[3212]: E0117 00:31:02.179437 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:31:03.328997 systemd[1]: Started sshd@14-10.200.8.17:22-10.200.16.10:34918.service - OpenSSH per-connection server daemon (10.200.16.10:34918). Jan 17 00:31:03.982425 sshd[6285]: Accepted publickey for core from 10.200.16.10 port 34918 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:03.985325 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:03.998631 systemd-logind[1685]: New session 17 of user core. Jan 17 00:31:04.002274 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 17 00:31:04.521449 sshd[6285]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:04.525253 systemd[1]: sshd@14-10.200.8.17:22-10.200.16.10:34918.service: Deactivated successfully. Jan 17 00:31:04.527773 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:31:04.529656 systemd-logind[1685]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:31:04.530941 systemd-logind[1685]: Removed session 17. Jan 17 00:31:05.912445 containerd[1707]: time="2026-01-17T00:31:05.912334452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:31:06.150698 containerd[1707]: time="2026-01-17T00:31:06.150630652Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:06.154104 containerd[1707]: time="2026-01-17T00:31:06.154041926Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:31:06.154247 containerd[1707]: time="2026-01-17T00:31:06.154066727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:31:06.154487 kubelet[3212]: E0117 00:31:06.154427 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:06.155032 kubelet[3212]: E0117 00:31:06.154505 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:31:06.155032 kubelet[3212]: E0117 00:31:06.154604 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-6749bfd78c-xh4fx_calico-apiserver(f761b8ec-f7d8-4ff6-9483-963882f3f6d4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:31:06.155032 kubelet[3212]: E0117 00:31:06.154653 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:31:06.940314 kubelet[3212]: E0117 00:31:06.940243 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:31:08.913769 kubelet[3212]: E0117 00:31:08.913059 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:31:09.644572 systemd[1]: Started sshd@15-10.200.8.17:22-10.200.16.10:54156.service - OpenSSH per-connection server daemon (10.200.16.10:54156). Jan 17 00:31:09.915068 kubelet[3212]: E0117 00:31:09.914883 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:31:10.287872 sshd[6297]: Accepted publickey for core from 10.200.16.10 port 54156 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:10.291366 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:10.297631 systemd-logind[1685]: New session 18 of user core. Jan 17 00:31:10.305030 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:31:10.809885 sshd[6297]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:10.815508 systemd-logind[1685]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:31:10.817291 systemd[1]: sshd@15-10.200.8.17:22-10.200.16.10:54156.service: Deactivated successfully. Jan 17 00:31:10.822639 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:31:10.827016 systemd-logind[1685]: Removed session 18. 
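Notice the transition in the entries above: the first failure for each container is ErrImagePull, and subsequent sync attempts report ImagePullBackOff ("Back-off pulling image ..."), meaning kubelet is no longer pulling on every pod sync but waiting out an exponentially growing interval first. The sketch below shows the doubling-with-cap pattern; the 10-second start and 5-minute ceiling mirror kubelet's image-pull backoff defaults as I understand them and should be treated as assumptions.

```go
// Doubling back-off with a cap, the pattern behind kubelet's
// ImagePullBackOff retry spacing (10s start and 5m cap assumed).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("attempt %d: back off %v before retrying pull\n", attempt, delay)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
}
```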
Jan 17 00:31:10.918819 containerd[1707]: time="2026-01-17T00:31:10.918172568Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:31:10.930623 kubelet[3212]: E0117 00:31:10.930177 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:31:10.933928 systemd[1]: Started sshd@16-10.200.8.17:22-10.200.16.10:54158.service - OpenSSH per-connection server daemon (10.200.16.10:54158). Jan 17 00:31:11.181534 containerd[1707]: time="2026-01-17T00:31:11.181197931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:31:11.184215 containerd[1707]: time="2026-01-17T00:31:11.184040192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:31:11.184215 containerd[1707]: time="2026-01-17T00:31:11.184073092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:31:11.184422 kubelet[3212]: E0117 00:31:11.184360 3212 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:31:11.184488 kubelet[3212]: E0117 00:31:11.184420 3212 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:31:11.186872 kubelet[3212]: E0117 00:31:11.184524 3212 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-bfd8dc5f6-rbjmv_calico-system(4b45b454-ebe6-4d21-bf83-a7855971fc58): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found" logger="UnhandledError" Jan 17 00:31:11.186872 kubelet[3212]: E0117 00:31:11.184573 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:31:11.589934 sshd[6309]: Accepted publickey for core from 10.200.16.10 port 54158 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:11.592410 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:11.602270 systemd-logind[1685]: New session 19 of user core. Jan 17 00:31:11.611043 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:31:12.174446 sshd[6309]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:12.179125 systemd[1]: sshd@16-10.200.8.17:22-10.200.16.10:54158.service: Deactivated successfully. Jan 17 00:31:12.181851 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:31:12.182783 systemd-logind[1685]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:31:12.183957 systemd-logind[1685]: Removed session 19. Jan 17 00:31:12.292193 systemd[1]: Started sshd@17-10.200.8.17:22-10.200.16.10:54168.service - OpenSSH per-connection server daemon (10.200.16.10:54168). Jan 17 00:31:12.926025 sshd[6320]: Accepted publickey for core from 10.200.16.10 port 54168 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:12.928412 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:12.934978 systemd-logind[1685]: New session 20 of user core. Jan 17 00:31:12.943055 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:31:14.264120 sshd[6320]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:14.269300 systemd[1]: sshd@17-10.200.8.17:22-10.200.16.10:54168.service: Deactivated successfully. Jan 17 00:31:14.273369 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:31:14.276584 systemd-logind[1685]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:31:14.278481 systemd-logind[1685]: Removed session 20. Jan 17 00:31:14.383054 systemd[1]: Started sshd@18-10.200.8.17:22-10.200.16.10:54178.service - OpenSSH per-connection server daemon (10.200.16.10:54178). Jan 17 00:31:15.036564 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 54178 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:15.039425 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:15.045620 systemd-logind[1685]: New session 21 of user core. Jan 17 00:31:15.054033 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:31:15.801582 sshd[6336]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:15.807992 systemd[1]: sshd@18-10.200.8.17:22-10.200.16.10:54178.service: Deactivated successfully. Jan 17 00:31:15.812575 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:31:15.813886 systemd-logind[1685]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:31:15.815337 systemd-logind[1685]: Removed session 21. 
Jan 17 00:31:15.918370 systemd[1]: Started sshd@19-10.200.8.17:22-10.200.16.10:54190.service - OpenSSH per-connection server daemon (10.200.16.10:54190). Jan 17 00:31:16.565300 sshd[6349]: Accepted publickey for core from 10.200.16.10 port 54190 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:16.568953 sshd[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:16.579113 systemd-logind[1685]: New session 22 of user core. Jan 17 00:31:16.583087 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:31:17.133147 sshd[6349]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:17.137761 systemd-logind[1685]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:31:17.140622 systemd[1]: sshd@19-10.200.8.17:22-10.200.16.10:54190.service: Deactivated successfully. Jan 17 00:31:17.144113 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:31:17.146317 systemd-logind[1685]: Removed session 22. Jan 17 00:31:17.913909 kubelet[3212]: E0117 00:31:17.913005 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:31:20.921643 kubelet[3212]: E0117 00:31:20.920185 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:31:20.921643 kubelet[3212]: E0117 00:31:20.920669 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:31:21.914238 kubelet[3212]: E0117 00:31:21.913264 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" 
Jan 17 00:31:21.915901 kubelet[3212]: E0117 00:31:21.914792 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:31:21.916120 kubelet[3212]: E0117 00:31:21.915808 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:31:22.261794 systemd[1]: Started sshd@20-10.200.8.17:22-10.200.16.10:51170.service - OpenSSH per-connection server daemon (10.200.16.10:51170). Jan 17 00:31:22.913130 sshd[6366]: Accepted publickey for core from 10.200.16.10 port 51170 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:22.917005 sshd[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:22.929395 systemd-logind[1685]: New session 23 of user core. Jan 17 00:31:22.936047 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:31:23.453999 sshd[6366]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:23.458739 systemd[1]: sshd@20-10.200.8.17:22-10.200.16.10:51170.service: Deactivated successfully. Jan 17 00:31:23.459231 systemd-logind[1685]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:31:23.463246 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:31:23.465782 systemd-logind[1685]: Removed session 23. 
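Where a pod has several failing containers (calico-csi plus csi-node-driver-registrar, or whisker plus whisker-backend), the pod worker folds the per-container StartContainer failures into a single bracketed "Error syncing pod" message, as in the two kubelet entries just above. The same fold is easy to express with Go's errors.Join; this is an illustration of the aggregation idea, not kubelet's actual code, and the container names are taken from the log:

```go
package main

import (
	"errors"
	"fmt"
)

func main() {
	// One error per failing container, joined into a single per-pod
	// error, mirroring the bracketed lists in the kubelet entries above.
	perContainer := []error{
		fmt.Errorf("failed to StartContainer for %q with ImagePullBackOff", "calico-csi"),
		fmt.Errorf("failed to StartContainer for %q with ImagePullBackOff", "csi-node-driver-registrar"),
	}
	fmt.Println(errors.Join(perContainer...))
}
```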
Jan 17 00:31:23.915874 kubelet[3212]: E0117 00:31:23.915784 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:31:25.228648 systemd[1]: run-containerd-runc-k8s.io-8e04231e283e94714f3bfd7195c487b8545949453031dbcb3518090e0a361325-runc.IvSmL3.mount: Deactivated successfully. Jan 17 00:31:28.575963 systemd[1]: Started sshd@21-10.200.8.17:22-10.200.16.10:51178.service - OpenSSH per-connection server daemon (10.200.16.10:51178). Jan 17 00:31:29.220748 sshd[6400]: Accepted publickey for core from 10.200.16.10 port 51178 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:29.224175 sshd[6400]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:29.234197 systemd-logind[1685]: New session 24 of user core. Jan 17 00:31:29.238297 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:31:29.772173 sshd[6400]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:29.775381 systemd[1]: sshd@21-10.200.8.17:22-10.200.16.10:51178.service: Deactivated successfully. Jan 17 00:31:29.777972 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:31:29.779711 systemd-logind[1685]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:31:29.781878 systemd-logind[1685]: Removed session 24. 
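For triage it helps to collapse a stretch of journal like this into a tally of distinct failing image references. The sketch below scans stdin for the image="..." field that kubelet attaches to pull errors (the field name is taken from the entries above; everything else is illustrative) and counts occurrences per image:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// imageRe captures the image="..." field kubelet attaches to pull errors.
var imageRe = regexp.MustCompile(`image="([^"]+)"`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		for _, m := range imageRe.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	for img, n := range counts {
		fmt.Printf("%4d  %s\n", n, img)
	}
}
```

Fed this section of the journal (for example via journalctl piped into the program, assuming kubelet logs land in the journal as they do here), it would show every count attached to the same missing v3.30.4 Calico tags on ghcr.io/flatcar, which is the single root cause behind all of the back-off noise above.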
Jan 17 00:31:31.914956 kubelet[3212]: E0117 00:31:31.913326 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:31:32.914515 kubelet[3212]: E0117 00:31:32.914280 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:31:32.918423 kubelet[3212]: E0117 00:31:32.918365 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:31:32.920214 kubelet[3212]: E0117 00:31:32.919830 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:31:32.920214 kubelet[3212]: E0117 00:31:32.920027 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:31:34.893204 systemd[1]: Started sshd@22-10.200.8.17:22-10.200.16.10:34516.service - OpenSSH per-connection server daemon (10.200.16.10:34516). Jan 17 00:31:35.535906 sshd[6414]: Accepted publickey for core from 10.200.16.10 port 34516 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:35.538535 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:35.544669 systemd-logind[1685]: New session 25 of user core. Jan 17 00:31:35.554158 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:31:36.094182 sshd[6414]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:36.099927 systemd-logind[1685]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:31:36.101202 systemd[1]: sshd@22-10.200.8.17:22-10.200.16.10:34516.service: Deactivated successfully. Jan 17 00:31:36.105825 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:31:36.109435 systemd-logind[1685]: Removed session 25. Jan 17 00:31:36.920212 kubelet[3212]: E0117 00:31:36.920135 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef" Jan 17 00:31:38.918770 kubelet[3212]: E0117 00:31:38.918339 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-bfd8dc5f6-rbjmv" podUID="4b45b454-ebe6-4d21-bf83-a7855971fc58" Jan 17 00:31:41.215191 systemd[1]: Started sshd@23-10.200.8.17:22-10.200.16.10:49174.service - OpenSSH per-connection server daemon (10.200.16.10:49174). Jan 17 00:31:41.857678 sshd[6426]: Accepted publickey for core from 10.200.16.10 port 49174 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:41.861334 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:41.870028 systemd-logind[1685]: New session 26 of user core. Jan 17 00:31:41.879275 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 00:31:42.369219 sshd[6426]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:42.373814 systemd-logind[1685]: Session 26 logged out. 
Waiting for processes to exit. Jan 17 00:31:42.375102 systemd[1]: sshd@23-10.200.8.17:22-10.200.16.10:49174.service: Deactivated successfully. Jan 17 00:31:42.377523 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 00:31:42.378966 systemd-logind[1685]: Removed session 26. Jan 17 00:31:43.913206 kubelet[3212]: E0117 00:31:43.913108 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-bdcd7994c-plxvx" podUID="dc357ba6-2c61-48b4-b7fe-5c77c584c2d0" Jan 17 00:31:44.915872 kubelet[3212]: E0117 00:31:44.915187 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-xh4fx" podUID="f761b8ec-f7d8-4ff6-9483-963882f3f6d4" Jan 17 00:31:45.912278 kubelet[3212]: E0117 00:31:45.912228 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6749bfd78c-bw7sp" podUID="782804bf-2c9e-4b36-ac94-4d730923b45e" Jan 17 00:31:46.914907 kubelet[3212]: E0117 00:31:46.914830 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-7kvdv" podUID="47118e25-f9cc-45d1-87d8-eb13465b2075" Jan 17 00:31:47.485616 systemd[1]: Started sshd@24-10.200.8.17:22-10.200.16.10:49186.service - OpenSSH per-connection server daemon (10.200.16.10:49186). 
Jan 17 00:31:47.915298 kubelet[3212]: E0117 00:31:47.915237 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5dbd59f56d-n649m" podUID="edba1a23-88e2-404b-a56f-6999060e2565" Jan 17 00:31:48.134080 sshd[6440]: Accepted publickey for core from 10.200.16.10 port 49186 ssh2: RSA SHA256:C4ZtjmC/KSmMP5NjLuEVGKvVADEA1jeiPQ/CKjwUsgE Jan 17 00:31:48.137314 sshd[6440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:31:48.146357 systemd-logind[1685]: New session 27 of user core. Jan 17 00:31:48.152088 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 00:31:48.694605 sshd[6440]: pam_unix(sshd:session): session closed for user core Jan 17 00:31:48.702260 systemd-logind[1685]: Session 27 logged out. Waiting for processes to exit. Jan 17 00:31:48.702541 systemd[1]: sshd@24-10.200.8.17:22-10.200.16.10:49186.service: Deactivated successfully. Jan 17 00:31:48.706394 systemd[1]: session-27.scope: Deactivated successfully. Jan 17 00:31:48.709478 systemd-logind[1685]: Removed session 27. Jan 17 00:31:48.915458 kubelet[3212]: E0117 00:31:48.914894 3212 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-jvj5r" podUID="87d77883-a4c9-44f4-bd4d-b065491724ef"